00:00:00.001 Started by upstream project "autotest-per-patch" build number 132337 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.128 The recommended git tool is: git 00:00:00.128 using credential 00000000-0000-0000-0000-000000000002 00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.168 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.163 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.173 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.184 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.184 > git config core.sparsecheckout # timeout=10 00:00:05.194 > git read-tree -mu HEAD # timeout=10 00:00:05.209 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.236 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.236 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.341 [Pipeline] Start of Pipeline 00:00:05.354 [Pipeline] library 00:00:05.355 Loading library shm_lib@master 00:00:05.355 Library shm_lib@master is cached. Copying from home. 00:00:05.371 [Pipeline] node 00:00:05.382 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:05.383 [Pipeline] { 00:00:05.394 [Pipeline] catchError 00:00:05.397 [Pipeline] { 00:00:05.410 [Pipeline] wrap 00:00:05.418 [Pipeline] { 00:00:05.426 [Pipeline] stage 00:00:05.428 [Pipeline] { (Prologue) 00:00:05.445 [Pipeline] echo 00:00:05.446 Node: VM-host-SM9 00:00:05.452 [Pipeline] cleanWs 00:00:05.460 [WS-CLEANUP] Deleting project workspace... 00:00:05.460 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.466 [WS-CLEANUP] done 00:00:05.645 [Pipeline] setCustomBuildProperty 00:00:05.733 [Pipeline] httpRequest 00:00:06.053 [Pipeline] echo 00:00:06.055 Sorcerer 10.211.164.20 is alive 00:00:06.061 [Pipeline] retry 00:00:06.062 [Pipeline] { 00:00:06.072 [Pipeline] httpRequest 00:00:06.080 HttpMethod: GET 00:00:06.082 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.088 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.132 Response Code: HTTP/1.1 200 OK 00:00:06.133 Success: Status code 200 is in the accepted range: 200,404 00:00:06.133 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.083 [Pipeline] } 00:00:07.098 [Pipeline] // retry 00:00:07.103 [Pipeline] sh 00:00:07.429 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.444 [Pipeline] httpRequest 00:00:07.805 [Pipeline] echo 00:00:07.807 Sorcerer 10.211.164.20 is alive 00:00:07.815 [Pipeline] retry 00:00:07.817 [Pipeline] { 00:00:07.828 [Pipeline] httpRequest 00:00:07.832 HttpMethod: GET 00:00:07.832 URL: http://10.211.164.20/packages/spdk_866ba5ffee15fb6c7a9a3f3b75e75af0ee97439f.tar.gz 00:00:07.833 Sending request to url: http://10.211.164.20/packages/spdk_866ba5ffee15fb6c7a9a3f3b75e75af0ee97439f.tar.gz 00:00:07.848 Response Code: HTTP/1.1 200 OK 00:00:07.849 Success: Status code 200 is in the accepted range: 200,404 00:00:07.849 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_866ba5ffee15fb6c7a9a3f3b75e75af0ee97439f.tar.gz 00:04:47.225 [Pipeline] } 00:04:47.243 [Pipeline] // retry 00:04:47.250 [Pipeline] sh 00:04:47.532 + tar --no-same-owner -xf spdk_866ba5ffee15fb6c7a9a3f3b75e75af0ee97439f.tar.gz 00:04:51.733 [Pipeline] sh 00:04:52.015 + git -C spdk log --oneline -n5 00:04:52.015 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:04:52.015 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:04:52.015 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:04:52.015 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:04:52.015 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:04:52.035 [Pipeline] writeFile 00:04:52.050 [Pipeline] sh 00:04:52.330 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:52.342 [Pipeline] sh 00:04:52.644 + cat autorun-spdk.conf 00:04:52.644 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:52.644 SPDK_TEST_NVMF=1 00:04:52.644 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:52.644 SPDK_TEST_URING=1 00:04:52.644 SPDK_TEST_USDT=1 00:04:52.644 SPDK_RUN_UBSAN=1 00:04:52.644 NET_TYPE=virt 00:04:52.644 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:52.688 RUN_NIGHTLY=0 00:04:52.690 [Pipeline] } 00:04:52.704 [Pipeline] // stage 00:04:52.721 [Pipeline] stage 00:04:52.723 [Pipeline] { (Run VM) 00:04:52.736 [Pipeline] sh 00:04:53.016 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:53.016 + echo 'Start stage prepare_nvme.sh' 00:04:53.016 Start stage prepare_nvme.sh 00:04:53.016 + [[ -n 5 ]] 00:04:53.016 + disk_prefix=ex5 00:04:53.016 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:04:53.016 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:04:53.016 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:04:53.016 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:04:53.016 ++ SPDK_TEST_NVMF=1 00:04:53.016 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:53.016 ++ SPDK_TEST_URING=1 00:04:53.016 ++ SPDK_TEST_USDT=1 00:04:53.016 ++ SPDK_RUN_UBSAN=1 00:04:53.016 ++ NET_TYPE=virt 00:04:53.016 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:53.016 ++ RUN_NIGHTLY=0 00:04:53.016 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:53.016 + nvme_files=() 00:04:53.016 + declare -A nvme_files 00:04:53.016 + backend_dir=/var/lib/libvirt/images/backends 00:04:53.016 + nvme_files['nvme.img']=5G 00:04:53.016 + nvme_files['nvme-cmb.img']=5G 00:04:53.016 + nvme_files['nvme-multi0.img']=4G 00:04:53.016 + nvme_files['nvme-multi1.img']=4G 00:04:53.016 + nvme_files['nvme-multi2.img']=4G 00:04:53.016 + nvme_files['nvme-openstack.img']=8G 00:04:53.016 + nvme_files['nvme-zns.img']=5G 00:04:53.016 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:53.016 + (( SPDK_TEST_FTL == 1 )) 00:04:53.016 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:53.016 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:53.016 + for nvme in "${!nvme_files[@]}" 00:04:53.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:04:53.016 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:53.016 + for nvme in "${!nvme_files[@]}" 00:04:53.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:04:53.016 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:53.016 + for nvme in "${!nvme_files[@]}" 00:04:53.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:04:53.016 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:53.016 + for nvme in "${!nvme_files[@]}" 00:04:53.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:04:53.016 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:53.016 + for nvme in "${!nvme_files[@]}" 00:04:53.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:04:53.276 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:53.276 + for nvme in "${!nvme_files[@]}" 00:04:53.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:04:53.276 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:53.276 + for nvme in "${!nvme_files[@]}" 00:04:53.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:04:53.276 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:53.276 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:04:53.276 + echo 'End stage prepare_nvme.sh' 00:04:53.276 End stage prepare_nvme.sh 00:04:53.289 [Pipeline] sh 00:04:53.570 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:53.570 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:04:53.570 00:04:53.570 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:04:53.570 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:04:53.570 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:53.570 HELP=0 00:04:53.570 DRY_RUN=0 00:04:53.570 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:04:53.570 NVME_DISKS_TYPE=nvme,nvme, 00:04:53.570 NVME_AUTO_CREATE=0 00:04:53.570 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:04:53.570 NVME_CMB=,, 00:04:53.570 NVME_PMR=,, 00:04:53.570 NVME_ZNS=,, 00:04:53.570 NVME_MS=,, 00:04:53.570 NVME_FDP=,, 00:04:53.570 SPDK_VAGRANT_DISTRO=fedora39 00:04:53.570 SPDK_VAGRANT_VMCPU=10 00:04:53.570 SPDK_VAGRANT_VMRAM=12288 00:04:53.570 SPDK_VAGRANT_PROVIDER=libvirt 00:04:53.570 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:53.570 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:53.570 SPDK_OPENSTACK_NETWORK=0 00:04:53.570 VAGRANT_PACKAGE_BOX=0 00:04:53.570 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:04:53.570 FORCE_DISTRO=true 00:04:53.570 VAGRANT_BOX_VERSION= 00:04:53.570 EXTRA_VAGRANTFILES= 00:04:53.570 NIC_MODEL=e1000 00:04:53.570 00:04:53.570 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:04:53.570 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:57.759 Bringing machine 'default' up with 'libvirt' provider... 00:04:57.759 ==> default: Creating image (snapshot of base box volume). 00:04:58.018 ==> default: Creating domain with the following settings... 
00:04:58.018 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732079712_93cc84f9d303a4738b75 00:04:58.018 ==> default: -- Domain type: kvm 00:04:58.018 ==> default: -- Cpus: 10 00:04:58.018 ==> default: -- Feature: acpi 00:04:58.018 ==> default: -- Feature: apic 00:04:58.018 ==> default: -- Feature: pae 00:04:58.018 ==> default: -- Memory: 12288M 00:04:58.018 ==> default: -- Memory Backing: hugepages: 00:04:58.018 ==> default: -- Management MAC: 00:04:58.018 ==> default: -- Loader: 00:04:58.018 ==> default: -- Nvram: 00:04:58.018 ==> default: -- Base box: spdk/fedora39 00:04:58.018 ==> default: -- Storage pool: default 00:04:58.018 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732079712_93cc84f9d303a4738b75.img (20G) 00:04:58.018 ==> default: -- Volume Cache: default 00:04:58.018 ==> default: -- Kernel: 00:04:58.018 ==> default: -- Initrd: 00:04:58.018 ==> default: -- Graphics Type: vnc 00:04:58.018 ==> default: -- Graphics Port: -1 00:04:58.018 ==> default: -- Graphics IP: 127.0.0.1 00:04:58.018 ==> default: -- Graphics Password: Not defined 00:04:58.018 ==> default: -- Video Type: cirrus 00:04:58.018 ==> default: -- Video VRAM: 9216 00:04:58.018 ==> default: -- Sound Type: 00:04:58.018 ==> default: -- Keymap: en-us 00:04:58.018 ==> default: -- TPM Path: 00:04:58.018 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:58.018 ==> default: -- Command line args: 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:58.018 ==> default: -> value=-drive, 00:04:58.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:58.018 ==> default: -> value=-drive, 00:04:58.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:58.018 ==> default: -> value=-drive, 00:04:58.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:58.018 ==> default: -> value=-drive, 00:04:58.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:58.018 ==> default: -> value=-device, 00:04:58.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:58.018 ==> default: Creating shared folders metadata... 00:04:58.018 ==> default: Starting domain. 00:04:59.398 ==> default: Waiting for domain to get an IP address... 00:05:17.483 ==> default: Waiting for SSH to become available... 00:05:17.483 ==> default: Configuring and enabling network interfaces... 
00:05:20.019 default: SSH address: 192.168.121.13:22 00:05:20.019 default: SSH username: vagrant 00:05:20.019 default: SSH auth method: private key 00:05:22.554 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:30.672 ==> default: Mounting SSHFS shared folder... 00:05:31.634 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:31.634 ==> default: Checking Mount.. 00:05:33.029 ==> default: Folder Successfully Mounted! 00:05:33.029 ==> default: Running provisioner: file... 00:05:33.597 default: ~/.gitconfig => .gitconfig 00:05:33.856 00:05:33.856 SUCCESS! 00:05:33.856 00:05:33.856 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:33.856 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:33.856 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:05:33.856 00:05:33.865 [Pipeline] } 00:05:33.880 [Pipeline] // stage 00:05:33.889 [Pipeline] dir 00:05:33.889 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:05:33.891 [Pipeline] { 00:05:33.903 [Pipeline] catchError 00:05:33.905 [Pipeline] { 00:05:33.917 [Pipeline] sh 00:05:34.198 + vagrant ssh-config --host vagrant 00:05:34.198 + sed -ne /^Host/,$p 00:05:34.198 + tee ssh_conf 00:05:37.492 Host vagrant 00:05:37.492 HostName 192.168.121.13 00:05:37.492 User vagrant 00:05:37.492 Port 22 00:05:37.492 UserKnownHostsFile /dev/null 00:05:37.492 StrictHostKeyChecking no 00:05:37.492 PasswordAuthentication no 00:05:37.492 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:37.492 IdentitiesOnly yes 00:05:37.492 LogLevel FATAL 00:05:37.492 ForwardAgent yes 00:05:37.492 ForwardX11 yes 00:05:37.492 00:05:37.507 [Pipeline] withEnv 00:05:37.510 [Pipeline] { 00:05:37.521 [Pipeline] sh 00:05:37.799 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:37.799 source /etc/os-release 00:05:37.799 [[ -e /image.version ]] && img=$(< /image.version) 00:05:37.799 # Minimal, systemd-like check. 00:05:37.799 if [[ -e /.dockerenv ]]; then 00:05:37.799 # Clear garbage from the node's name: 00:05:37.799 # agt-er_autotest_547-896 -> autotest_547-896 00:05:37.799 # $HOSTNAME is the actual container id 00:05:37.799 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:37.799 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:37.799 # We can assume this is a mount from a host where container is running, 00:05:37.799 # so fetch its hostname to easily identify the target swarm worker. 
00:05:37.799 container="$(< /etc/hostname) ($agent)" 00:05:37.799 else 00:05:37.799 # Fallback 00:05:37.799 container=$agent 00:05:37.799 fi 00:05:37.799 fi 00:05:37.799 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:37.799 00:05:38.070 [Pipeline] } 00:05:38.086 [Pipeline] // withEnv 00:05:38.095 [Pipeline] setCustomBuildProperty 00:05:38.109 [Pipeline] stage 00:05:38.111 [Pipeline] { (Tests) 00:05:38.127 [Pipeline] sh 00:05:38.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:38.681 [Pipeline] sh 00:05:38.961 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:39.233 [Pipeline] timeout 00:05:39.234 Timeout set to expire in 1 hr 0 min 00:05:39.235 [Pipeline] { 00:05:39.246 [Pipeline] sh 00:05:39.530 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:40.096 HEAD is now at 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:05:40.107 [Pipeline] sh 00:05:40.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:40.657 [Pipeline] sh 00:05:40.936 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:41.210 [Pipeline] sh 00:05:41.491 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:05:41.491 ++ readlink -f spdk_repo 00:05:41.491 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:41.491 + [[ -n /home/vagrant/spdk_repo ]] 00:05:41.491 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:41.491 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:41.491 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:41.491 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:41.491 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:41.491 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:05:41.491 + cd /home/vagrant/spdk_repo 00:05:41.491 + source /etc/os-release 00:05:41.491 ++ NAME='Fedora Linux' 00:05:41.491 ++ VERSION='39 (Cloud Edition)' 00:05:41.491 ++ ID=fedora 00:05:41.491 ++ VERSION_ID=39 00:05:41.491 ++ VERSION_CODENAME= 00:05:41.491 ++ PLATFORM_ID=platform:f39 00:05:41.491 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:41.491 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:41.491 ++ LOGO=fedora-logo-icon 00:05:41.491 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:41.491 ++ HOME_URL=https://fedoraproject.org/ 00:05:41.491 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:41.491 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:41.491 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:41.491 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:41.491 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:41.491 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:41.491 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:41.491 ++ SUPPORT_END=2024-11-12 00:05:41.491 ++ VARIANT='Cloud Edition' 00:05:41.491 ++ VARIANT_ID=cloud 00:05:41.491 + uname -a 00:05:41.491 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:41.491 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:42.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.060 Hugepages 00:05:42.060 node hugesize free / total 00:05:42.060 node0 1048576kB 0 / 0 00:05:42.060 node0 2048kB 0 / 0 00:05:42.060 00:05:42.060 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:42.060 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:42.060 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:42.060 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:42.319 + rm -f /tmp/spdk-ld-path 00:05:42.319 + source autorun-spdk.conf 00:05:42.319 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:42.319 ++ SPDK_TEST_NVMF=1 00:05:42.319 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:42.319 ++ SPDK_TEST_URING=1 00:05:42.319 ++ SPDK_TEST_USDT=1 00:05:42.319 ++ SPDK_RUN_UBSAN=1 00:05:42.319 ++ NET_TYPE=virt 00:05:42.319 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:42.319 ++ RUN_NIGHTLY=0 00:05:42.319 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:42.319 + [[ -n '' ]] 00:05:42.319 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:42.319 + for M in /var/spdk/build-*-manifest.txt 00:05:42.319 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:42.319 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.319 + for M in /var/spdk/build-*-manifest.txt 00:05:42.319 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:42.319 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.319 + for M in /var/spdk/build-*-manifest.txt 00:05:42.319 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:42.319 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.319 ++ uname 00:05:42.319 + [[ Linux == \L\i\n\u\x ]] 00:05:42.319 + sudo dmesg -T 00:05:42.319 + sudo dmesg --clear 00:05:42.319 + dmesg_pid=5258 00:05:42.319 + [[ Fedora Linux == FreeBSD ]] 00:05:42.319 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.319 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.319 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:42.319 + [[ -x /usr/src/fio-static/fio ]] 00:05:42.319 + sudo dmesg -Tw 00:05:42.319 + export FIO_BIN=/usr/src/fio-static/fio 00:05:42.319 + FIO_BIN=/usr/src/fio-static/fio 00:05:42.319 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:42.319 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:42.319 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:42.319 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.319 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.319 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:42.319 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.319 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.319 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:42.319 05:15:56 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:42.319 05:15:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:42.319 05:15:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:42.319 05:15:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:05:42.319 05:15:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:42.319 05:15:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:05:42.319 05:15:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:05:42.320 05:15:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:05:42.320 05:15:56 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:05:42.320 05:15:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:42.320 05:15:56 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:05:42.320 05:15:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:42.320 05:15:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:42.579 05:15:56 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:42.579 05:15:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.579 05:15:56 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:42.579 05:15:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:42.579 05:15:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.579 05:15:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.579 05:15:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.579 05:15:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.579 05:15:56 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.579 05:15:56 -- paths/export.sh@5 -- $ export PATH 00:05:42.579 05:15:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.579 05:15:56 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:42.579 05:15:56 -- common/autobuild_common.sh@486 -- $ date +%s 00:05:42.579 05:15:56 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732079756.XXXXXX 00:05:42.579 05:15:56 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732079756.0cf6c0 00:05:42.579 05:15:56 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:05:42.579 05:15:56 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:05:42.579 05:15:56 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:42.579 05:15:56 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:42.579 05:15:56 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:42.579 05:15:56 -- common/autobuild_common.sh@502 -- $ get_config_params 00:05:42.579 05:15:56 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:05:42.579 05:15:56 -- common/autotest_common.sh@10 -- $ set +x 00:05:42.579 05:15:56 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:05:42.579 05:15:56 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:05:42.579 05:15:56 -- pm/common@17 -- $ local monitor 00:05:42.579 05:15:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.579 05:15:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.579 05:15:56 -- pm/common@25 -- $ sleep 1 00:05:42.579 05:15:56 -- pm/common@21 -- $ date +%s 00:05:42.579 05:15:56 -- pm/common@21 -- $ date +%s 00:05:42.579 05:15:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732079756 00:05:42.579 05:15:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732079756 00:05:42.579 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732079756_collect-cpu-load.pm.log 00:05:42.579 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732079756_collect-vmstat.pm.log 00:05:43.518 05:15:57 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:05:43.518 05:15:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:43.518 05:15:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:43.518 05:15:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:43.518 05:15:57 -- spdk/autobuild.sh@16 -- $ date -u 00:05:43.518 Wed Nov 20 05:15:57 AM UTC 2024 00:05:43.518 05:15:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:43.518 v25.01-pre-193-g866ba5ffe 00:05:43.518 05:15:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:43.518 05:15:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:43.518 05:15:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:43.518 05:15:57 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:43.518 05:15:57 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:43.518 05:15:57 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.518 ************************************ 00:05:43.518 START TEST ubsan 00:05:43.518 ************************************ 00:05:43.518 using ubsan 00:05:43.518 05:15:57 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:05:43.518 00:05:43.518 real 0m0.000s 00:05:43.518 user 0m0.000s 00:05:43.518 sys 0m0.000s 00:05:43.518 05:15:57 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:43.518 ************************************ 00:05:43.518 END TEST ubsan 00:05:43.518 05:15:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:43.518 ************************************ 00:05:43.518 05:15:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:43.518 05:15:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:43.518 05:15:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:43.518 05:15:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:05:43.777 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:43.777 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:44.036 Using 'verbs' RDMA provider 00:05:59.898 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:12.106 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:12.106 Creating mk/config.mk...done. 00:06:12.106 Creating mk/cc.flags.mk...done. 00:06:12.106 Type 'make' to build. 
00:06:12.106 05:16:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:12.106 05:16:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:12.106 05:16:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:12.106 05:16:25 -- common/autotest_common.sh@10 -- $ set +x 00:06:12.106 ************************************ 00:06:12.106 START TEST make 00:06:12.106 ************************************ 00:06:12.106 05:16:25 make -- common/autotest_common.sh@1127 -- $ make -j10 00:06:12.106 make[1]: Nothing to be done for 'all'. 00:06:24.305 The Meson build system 00:06:24.305 Version: 1.5.0 00:06:24.305 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:24.305 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:24.305 Build type: native build 00:06:24.305 Program cat found: YES (/usr/bin/cat) 00:06:24.305 Project name: DPDK 00:06:24.305 Project version: 24.03.0 00:06:24.305 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:24.305 C linker for the host machine: cc ld.bfd 2.40-14 00:06:24.305 Host machine cpu family: x86_64 00:06:24.305 Host machine cpu: x86_64 00:06:24.305 Message: ## Building in Developer Mode ## 00:06:24.305 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:24.305 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:24.305 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:24.305 Program python3 found: YES (/usr/bin/python3) 00:06:24.305 Program cat found: YES (/usr/bin/cat) 00:06:24.305 Compiler for C supports arguments -march=native: YES 00:06:24.305 Checking for size of "void *" : 8 00:06:24.305 Checking for size of "void *" : 8 (cached) 00:06:24.305 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:24.305 Library m found: YES 00:06:24.305 Library numa found: YES 00:06:24.305 Has header "numaif.h" : YES 00:06:24.305 Library fdt found: NO 00:06:24.305 Library execinfo found: NO 00:06:24.305 Has header "execinfo.h" : YES 00:06:24.305 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:24.305 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:24.305 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:24.305 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:24.305 Run-time dependency openssl found: YES 3.1.1 00:06:24.305 Run-time dependency libpcap found: YES 1.10.4 00:06:24.305 Has header "pcap.h" with dependency libpcap: YES 00:06:24.305 Compiler for C supports arguments -Wcast-qual: YES 00:06:24.305 Compiler for C supports arguments -Wdeprecated: YES 00:06:24.305 Compiler for C supports arguments -Wformat: YES 00:06:24.305 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:24.305 Compiler for C supports arguments -Wformat-security: NO 00:06:24.305 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:24.305 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:24.305 Compiler for C supports arguments -Wnested-externs: YES 00:06:24.305 Compiler for C supports arguments -Wold-style-definition: YES 00:06:24.305 Compiler for C supports arguments -Wpointer-arith: YES 00:06:24.305 Compiler for C supports arguments -Wsign-compare: YES 00:06:24.305 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:24.305 Compiler for C supports arguments -Wundef: YES 00:06:24.305 Compiler for C supports arguments -Wwrite-strings: YES 00:06:24.305 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:06:24.305 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:24.305 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:24.305 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:24.305 Program objdump found: YES (/usr/bin/objdump) 00:06:24.305 Compiler for C supports arguments -mavx512f: YES 00:06:24.305 Checking if "AVX512 checking" compiles: YES 00:06:24.305 Fetching value of define "__SSE4_2__" : 1 00:06:24.305 Fetching value of define "__AES__" : 1 00:06:24.305 Fetching value of define "__AVX__" : 1 00:06:24.305 Fetching value of define "__AVX2__" : 1 00:06:24.305 Fetching value of define "__AVX512BW__" : (undefined) 00:06:24.305 Fetching value of define "__AVX512CD__" : (undefined) 00:06:24.305 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:24.305 Fetching value of define "__AVX512F__" : (undefined) 00:06:24.305 Fetching value of define "__AVX512VL__" : (undefined) 00:06:24.305 Fetching value of define "__PCLMUL__" : 1 00:06:24.305 Fetching value of define "__RDRND__" : 1 00:06:24.305 Fetching value of define "__RDSEED__" : 1 00:06:24.305 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:24.305 Fetching value of define "__znver1__" : (undefined) 00:06:24.305 Fetching value of define "__znver2__" : (undefined) 00:06:24.305 Fetching value of define "__znver3__" : (undefined) 00:06:24.305 Fetching value of define "__znver4__" : (undefined) 00:06:24.305 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:24.305 Message: lib/log: Defining dependency "log" 00:06:24.305 Message: lib/kvargs: Defining dependency "kvargs" 00:06:24.305 Message: lib/telemetry: Defining dependency "telemetry" 00:06:24.305 Checking for function "getentropy" : NO 00:06:24.305 Message: lib/eal: Defining dependency "eal" 00:06:24.305 Message: lib/ring: Defining dependency "ring" 00:06:24.305 Message: lib/rcu: Defining dependency "rcu" 00:06:24.305 Message: lib/mempool: Defining dependency "mempool" 00:06:24.305 Message: lib/mbuf: Defining dependency "mbuf" 00:06:24.305 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:24.305 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:24.305 Compiler for C supports arguments -mpclmul: YES 00:06:24.305 Compiler for C supports arguments -maes: YES 00:06:24.305 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:24.305 Compiler for C supports arguments -mavx512bw: YES 00:06:24.305 Compiler for C supports arguments -mavx512dq: YES 00:06:24.305 Compiler for C supports arguments -mavx512vl: YES 00:06:24.305 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:24.305 Compiler for C supports arguments -mavx2: YES 00:06:24.305 Compiler for C supports arguments -mavx: YES 00:06:24.305 Message: lib/net: Defining dependency "net" 00:06:24.305 Message: lib/meter: Defining dependency "meter" 00:06:24.305 Message: lib/ethdev: Defining dependency "ethdev" 00:06:24.305 Message: lib/pci: Defining dependency "pci" 00:06:24.305 Message: lib/cmdline: Defining dependency "cmdline" 00:06:24.305 Message: lib/hash: Defining dependency "hash" 00:06:24.305 Message: lib/timer: Defining dependency "timer" 00:06:24.305 Message: lib/compressdev: Defining dependency "compressdev" 00:06:24.305 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:24.305 Message: lib/dmadev: Defining dependency "dmadev" 00:06:24.305 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:24.305 Message: lib/power: Defining 
dependency "power" 00:06:24.305 Message: lib/reorder: Defining dependency "reorder" 00:06:24.305 Message: lib/security: Defining dependency "security" 00:06:24.305 Has header "linux/userfaultfd.h" : YES 00:06:24.305 Has header "linux/vduse.h" : YES 00:06:24.305 Message: lib/vhost: Defining dependency "vhost" 00:06:24.305 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:24.306 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:24.306 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:24.306 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:24.306 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:24.306 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:24.306 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:24.306 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:24.306 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:24.306 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:24.306 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:24.306 Configuring doxy-api-html.conf using configuration 00:06:24.306 Configuring doxy-api-man.conf using configuration 00:06:24.306 Program mandb found: YES (/usr/bin/mandb) 00:06:24.306 Program sphinx-build found: NO 00:06:24.306 Configuring rte_build_config.h using configuration 00:06:24.306 Message: 00:06:24.306 ================= 00:06:24.306 Applications Enabled 00:06:24.306 ================= 00:06:24.306 00:06:24.306 apps: 00:06:24.306 00:06:24.306 00:06:24.306 Message: 00:06:24.306 ================= 00:06:24.306 Libraries Enabled 00:06:24.306 ================= 00:06:24.306 00:06:24.306 libs: 00:06:24.306 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:24.306 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:24.306 cryptodev, dmadev, power, reorder, security, vhost, 00:06:24.306 00:06:24.306 Message: 00:06:24.306 =============== 00:06:24.306 Drivers Enabled 00:06:24.306 =============== 00:06:24.306 00:06:24.306 common: 00:06:24.306 00:06:24.306 bus: 00:06:24.306 pci, vdev, 00:06:24.306 mempool: 00:06:24.306 ring, 00:06:24.306 dma: 00:06:24.306 00:06:24.306 net: 00:06:24.306 00:06:24.306 crypto: 00:06:24.306 00:06:24.306 compress: 00:06:24.306 00:06:24.306 vdpa: 00:06:24.306 00:06:24.306 00:06:24.306 Message: 00:06:24.306 ================= 00:06:24.306 Content Skipped 00:06:24.306 ================= 00:06:24.306 00:06:24.306 apps: 00:06:24.306 dumpcap: explicitly disabled via build config 00:06:24.306 graph: explicitly disabled via build config 00:06:24.306 pdump: explicitly disabled via build config 00:06:24.306 proc-info: explicitly disabled via build config 00:06:24.306 test-acl: explicitly disabled via build config 00:06:24.306 test-bbdev: explicitly disabled via build config 00:06:24.306 test-cmdline: explicitly disabled via build config 00:06:24.306 test-compress-perf: explicitly disabled via build config 00:06:24.306 test-crypto-perf: explicitly disabled via build config 00:06:24.306 test-dma-perf: explicitly disabled via build config 00:06:24.306 test-eventdev: explicitly disabled via build config 00:06:24.306 test-fib: explicitly disabled via build config 00:06:24.306 test-flow-perf: explicitly disabled via build config 00:06:24.306 test-gpudev: explicitly disabled via build config 00:06:24.306 test-mldev: explicitly disabled via build config 00:06:24.306 test-pipeline: 
explicitly disabled via build config 00:06:24.306 test-pmd: explicitly disabled via build config 00:06:24.306 test-regex: explicitly disabled via build config 00:06:24.306 test-sad: explicitly disabled via build config 00:06:24.306 test-security-perf: explicitly disabled via build config 00:06:24.306 00:06:24.306 libs: 00:06:24.306 argparse: explicitly disabled via build config 00:06:24.306 metrics: explicitly disabled via build config 00:06:24.306 acl: explicitly disabled via build config 00:06:24.306 bbdev: explicitly disabled via build config 00:06:24.306 bitratestats: explicitly disabled via build config 00:06:24.306 bpf: explicitly disabled via build config 00:06:24.306 cfgfile: explicitly disabled via build config 00:06:24.306 distributor: explicitly disabled via build config 00:06:24.306 efd: explicitly disabled via build config 00:06:24.306 eventdev: explicitly disabled via build config 00:06:24.306 dispatcher: explicitly disabled via build config 00:06:24.306 gpudev: explicitly disabled via build config 00:06:24.306 gro: explicitly disabled via build config 00:06:24.306 gso: explicitly disabled via build config 00:06:24.306 ip_frag: explicitly disabled via build config 00:06:24.306 jobstats: explicitly disabled via build config 00:06:24.306 latencystats: explicitly disabled via build config 00:06:24.306 lpm: explicitly disabled via build config 00:06:24.306 member: explicitly disabled via build config 00:06:24.306 pcapng: explicitly disabled via build config 00:06:24.306 rawdev: explicitly disabled via build config 00:06:24.306 regexdev: explicitly disabled via build config 00:06:24.306 mldev: explicitly disabled via build config 00:06:24.306 rib: explicitly disabled via build config 00:06:24.306 sched: explicitly disabled via build config 00:06:24.306 stack: explicitly disabled via build config 00:06:24.306 ipsec: explicitly disabled via build config 00:06:24.306 pdcp: explicitly disabled via build config 00:06:24.306 fib: explicitly disabled via build config 00:06:24.306 port: explicitly disabled via build config 00:06:24.306 pdump: explicitly disabled via build config 00:06:24.306 table: explicitly disabled via build config 00:06:24.306 pipeline: explicitly disabled via build config 00:06:24.306 graph: explicitly disabled via build config 00:06:24.306 node: explicitly disabled via build config 00:06:24.306 00:06:24.306 drivers: 00:06:24.306 common/cpt: not in enabled drivers build config 00:06:24.306 common/dpaax: not in enabled drivers build config 00:06:24.306 common/iavf: not in enabled drivers build config 00:06:24.306 common/idpf: not in enabled drivers build config 00:06:24.306 common/ionic: not in enabled drivers build config 00:06:24.306 common/mvep: not in enabled drivers build config 00:06:24.306 common/octeontx: not in enabled drivers build config 00:06:24.306 bus/auxiliary: not in enabled drivers build config 00:06:24.306 bus/cdx: not in enabled drivers build config 00:06:24.306 bus/dpaa: not in enabled drivers build config 00:06:24.306 bus/fslmc: not in enabled drivers build config 00:06:24.306 bus/ifpga: not in enabled drivers build config 00:06:24.306 bus/platform: not in enabled drivers build config 00:06:24.306 bus/uacce: not in enabled drivers build config 00:06:24.306 bus/vmbus: not in enabled drivers build config 00:06:24.306 common/cnxk: not in enabled drivers build config 00:06:24.306 common/mlx5: not in enabled drivers build config 00:06:24.306 common/nfp: not in enabled drivers build config 00:06:24.306 common/nitrox: not in enabled drivers build config 
00:06:24.306 common/qat: not in enabled drivers build config 00:06:24.306 common/sfc_efx: not in enabled drivers build config 00:06:24.306 mempool/bucket: not in enabled drivers build config 00:06:24.306 mempool/cnxk: not in enabled drivers build config 00:06:24.306 mempool/dpaa: not in enabled drivers build config 00:06:24.306 mempool/dpaa2: not in enabled drivers build config 00:06:24.306 mempool/octeontx: not in enabled drivers build config 00:06:24.306 mempool/stack: not in enabled drivers build config 00:06:24.306 dma/cnxk: not in enabled drivers build config 00:06:24.306 dma/dpaa: not in enabled drivers build config 00:06:24.306 dma/dpaa2: not in enabled drivers build config 00:06:24.306 dma/hisilicon: not in enabled drivers build config 00:06:24.306 dma/idxd: not in enabled drivers build config 00:06:24.306 dma/ioat: not in enabled drivers build config 00:06:24.306 dma/skeleton: not in enabled drivers build config 00:06:24.306 net/af_packet: not in enabled drivers build config 00:06:24.306 net/af_xdp: not in enabled drivers build config 00:06:24.306 net/ark: not in enabled drivers build config 00:06:24.306 net/atlantic: not in enabled drivers build config 00:06:24.306 net/avp: not in enabled drivers build config 00:06:24.306 net/axgbe: not in enabled drivers build config 00:06:24.306 net/bnx2x: not in enabled drivers build config 00:06:24.306 net/bnxt: not in enabled drivers build config 00:06:24.306 net/bonding: not in enabled drivers build config 00:06:24.306 net/cnxk: not in enabled drivers build config 00:06:24.306 net/cpfl: not in enabled drivers build config 00:06:24.306 net/cxgbe: not in enabled drivers build config 00:06:24.306 net/dpaa: not in enabled drivers build config 00:06:24.306 net/dpaa2: not in enabled drivers build config 00:06:24.306 net/e1000: not in enabled drivers build config 00:06:24.306 net/ena: not in enabled drivers build config 00:06:24.306 net/enetc: not in enabled drivers build config 00:06:24.306 net/enetfec: not in enabled drivers build config 00:06:24.306 net/enic: not in enabled drivers build config 00:06:24.306 net/failsafe: not in enabled drivers build config 00:06:24.306 net/fm10k: not in enabled drivers build config 00:06:24.306 net/gve: not in enabled drivers build config 00:06:24.306 net/hinic: not in enabled drivers build config 00:06:24.306 net/hns3: not in enabled drivers build config 00:06:24.306 net/i40e: not in enabled drivers build config 00:06:24.306 net/iavf: not in enabled drivers build config 00:06:24.306 net/ice: not in enabled drivers build config 00:06:24.306 net/idpf: not in enabled drivers build config 00:06:24.306 net/igc: not in enabled drivers build config 00:06:24.306 net/ionic: not in enabled drivers build config 00:06:24.306 net/ipn3ke: not in enabled drivers build config 00:06:24.306 net/ixgbe: not in enabled drivers build config 00:06:24.306 net/mana: not in enabled drivers build config 00:06:24.306 net/memif: not in enabled drivers build config 00:06:24.306 net/mlx4: not in enabled drivers build config 00:06:24.306 net/mlx5: not in enabled drivers build config 00:06:24.306 net/mvneta: not in enabled drivers build config 00:06:24.306 net/mvpp2: not in enabled drivers build config 00:06:24.306 net/netvsc: not in enabled drivers build config 00:06:24.306 net/nfb: not in enabled drivers build config 00:06:24.306 net/nfp: not in enabled drivers build config 00:06:24.306 net/ngbe: not in enabled drivers build config 00:06:24.306 net/null: not in enabled drivers build config 00:06:24.306 net/octeontx: not in enabled drivers 
build config 00:06:24.306 net/octeon_ep: not in enabled drivers build config 00:06:24.306 net/pcap: not in enabled drivers build config 00:06:24.306 net/pfe: not in enabled drivers build config 00:06:24.307 net/qede: not in enabled drivers build config 00:06:24.307 net/ring: not in enabled drivers build config 00:06:24.307 net/sfc: not in enabled drivers build config 00:06:24.307 net/softnic: not in enabled drivers build config 00:06:24.307 net/tap: not in enabled drivers build config 00:06:24.307 net/thunderx: not in enabled drivers build config 00:06:24.307 net/txgbe: not in enabled drivers build config 00:06:24.307 net/vdev_netvsc: not in enabled drivers build config 00:06:24.307 net/vhost: not in enabled drivers build config 00:06:24.307 net/virtio: not in enabled drivers build config 00:06:24.307 net/vmxnet3: not in enabled drivers build config 00:06:24.307 raw/*: missing internal dependency, "rawdev" 00:06:24.307 crypto/armv8: not in enabled drivers build config 00:06:24.307 crypto/bcmfs: not in enabled drivers build config 00:06:24.307 crypto/caam_jr: not in enabled drivers build config 00:06:24.307 crypto/ccp: not in enabled drivers build config 00:06:24.307 crypto/cnxk: not in enabled drivers build config 00:06:24.307 crypto/dpaa_sec: not in enabled drivers build config 00:06:24.307 crypto/dpaa2_sec: not in enabled drivers build config 00:06:24.307 crypto/ipsec_mb: not in enabled drivers build config 00:06:24.307 crypto/mlx5: not in enabled drivers build config 00:06:24.307 crypto/mvsam: not in enabled drivers build config 00:06:24.307 crypto/nitrox: not in enabled drivers build config 00:06:24.307 crypto/null: not in enabled drivers build config 00:06:24.307 crypto/octeontx: not in enabled drivers build config 00:06:24.307 crypto/openssl: not in enabled drivers build config 00:06:24.307 crypto/scheduler: not in enabled drivers build config 00:06:24.307 crypto/uadk: not in enabled drivers build config 00:06:24.307 crypto/virtio: not in enabled drivers build config 00:06:24.307 compress/isal: not in enabled drivers build config 00:06:24.307 compress/mlx5: not in enabled drivers build config 00:06:24.307 compress/nitrox: not in enabled drivers build config 00:06:24.307 compress/octeontx: not in enabled drivers build config 00:06:24.307 compress/zlib: not in enabled drivers build config 00:06:24.307 regex/*: missing internal dependency, "regexdev" 00:06:24.307 ml/*: missing internal dependency, "mldev" 00:06:24.307 vdpa/ifc: not in enabled drivers build config 00:06:24.307 vdpa/mlx5: not in enabled drivers build config 00:06:24.307 vdpa/nfp: not in enabled drivers build config 00:06:24.307 vdpa/sfc: not in enabled drivers build config 00:06:24.307 event/*: missing internal dependency, "eventdev" 00:06:24.307 baseband/*: missing internal dependency, "bbdev" 00:06:24.307 gpu/*: missing internal dependency, "gpudev" 00:06:24.307 00:06:24.307 00:06:24.307 Build targets in project: 85 00:06:24.307 00:06:24.307 DPDK 24.03.0 00:06:24.307 00:06:24.307 User defined options 00:06:24.307 buildtype : debug 00:06:24.307 default_library : shared 00:06:24.307 libdir : lib 00:06:24.307 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:24.307 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:24.307 c_link_args : 00:06:24.307 cpu_instruction_set: native 00:06:24.307 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:24.307 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:24.307 enable_docs : false 00:06:24.307 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:24.307 enable_kmods : false 00:06:24.307 max_lcores : 128 00:06:24.307 tests : false 00:06:24.307 00:06:24.307 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:24.307 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:24.307 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:24.307 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:24.307 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:24.307 [4/268] Linking static target lib/librte_kvargs.a 00:06:24.307 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:24.307 [6/268] Linking static target lib/librte_log.a 00:06:24.565 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.824 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:24.824 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:24.824 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:24.824 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:25.082 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:25.082 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:25.082 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:25.082 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:25.340 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:25.340 [17/268] Linking target lib/librte_log.so.24.1 00:06:25.340 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:25.340 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:25.340 [20/268] Linking static target lib/librte_telemetry.a 00:06:25.599 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:25.599 [22/268] Linking target lib/librte_kvargs.so.24.1 00:06:25.599 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:25.857 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:25.857 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:25.857 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:25.857 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:25.857 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:25.857 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:26.115 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:26.115 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:26.116 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:26.116 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.374 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:26.374 [35/268] Linking target lib/librte_telemetry.so.24.1 00:06:26.374 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:26.631 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:26.631 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:26.631 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:26.889 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:26.889 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:26.889 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:26.889 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:26.889 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:26.889 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:26.889 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:27.147 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:27.147 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:27.147 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:27.405 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:27.405 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:27.662 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:27.662 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:27.920 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:27.920 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:27.920 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:27.920 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:28.178 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:28.178 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:28.178 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:28.178 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:28.437 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:28.437 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:28.437 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:28.696 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:28.954 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:28.954 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:29.213 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:29.213 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:29.213 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:29.213 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:29.213 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:29.471 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:29.471 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:29.471 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:29.471 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:29.729 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:29.729 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:29.988 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:29.988 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:29.988 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:30.247 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:30.247 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:30.506 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:30.506 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:30.506 [86/268] Linking static target lib/librte_ring.a 00:06:30.506 [87/268] Linking static target lib/librte_eal.a 00:06:30.506 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:30.506 [89/268] Linking static target lib/librte_rcu.a 00:06:30.767 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:30.767 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:30.767 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:30.767 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:30.767 [94/268] Linking static target lib/librte_mempool.a 00:06:30.767 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.026 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:31.026 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.285 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:31.285 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:31.285 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:31.285 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:31.285 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:31.285 [103/268] Linking static target lib/librte_mbuf.a 00:06:31.543 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:31.543 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:31.543 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:31.801 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:31.801 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:31.801 [109/268] Linking static target lib/librte_net.a 00:06:32.059 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:32.059 [111/268] Linking static target lib/librte_meter.a 00:06:32.059 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:32.059 [113/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.317 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:32.318 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.318 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:32.318 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.576 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.576 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:32.835 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:33.093 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:33.352 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:33.352 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:33.609 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:33.609 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:33.609 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:33.609 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:33.609 [128/268] Linking static target lib/librte_pci.a 00:06:33.609 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:33.609 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:33.867 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:33.867 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:33.867 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:33.867 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:33.867 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:33.867 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:33.867 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:34.126 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:34.126 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.126 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:34.126 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:34.126 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:34.126 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:34.126 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:34.126 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:34.126 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:34.126 [147/268] Linking static target lib/librte_ethdev.a 00:06:34.693 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:34.693 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:34.693 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:34.693 [151/268] Linking static target lib/librte_cmdline.a 00:06:34.950 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:34.950 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:34.950 [154/268] Linking static target lib/librte_timer.a 00:06:35.209 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:35.209 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:35.209 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:35.209 [158/268] Linking static target lib/librte_hash.a 00:06:35.209 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:35.468 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:35.726 [161/268] Linking static target lib/librte_compressdev.a 00:06:35.726 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:35.726 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:35.726 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:35.984 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:35.984 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:36.243 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:36.243 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:36.501 [169/268] Linking static target lib/librte_dmadev.a 00:06:36.501 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:36.501 [171/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.501 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:36.501 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.501 [174/268] Linking static target lib/librte_cryptodev.a 00:06:36.501 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:36.501 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.501 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:36.759 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:37.017 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:37.017 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:37.017 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:37.017 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:37.276 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:37.276 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:37.276 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.276 [186/268] Linking static target lib/librte_power.a 00:06:37.534 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:37.534 [188/268] Linking static target lib/librte_reorder.a 00:06:37.792 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:37.792 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:38.049 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:38.049 [192/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:06:38.049 [193/268] Linking static target lib/librte_security.a 00:06:38.307 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.307 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:38.875 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.875 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:38.875 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.133 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:39.133 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.133 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:39.133 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:39.391 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:39.650 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:39.650 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:39.909 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:39.909 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:39.909 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:39.909 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:39.909 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:39.909 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:39.909 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:40.168 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:40.168 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:40.168 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:40.168 [216/268] Linking static target drivers/librte_bus_vdev.a 00:06:40.168 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:40.168 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:40.168 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:40.168 [220/268] Linking static target drivers/librte_bus_pci.a 00:06:40.427 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:40.427 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:40.427 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:40.685 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:40.685 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:40.685 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:40.685 [227/268] Linking static target drivers/librte_mempool_ring.a 00:06:40.944 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.203 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:41.203 [230/268] Linking static target lib/librte_vhost.a 00:06:42.150 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.150 [232/268] Linking target lib/librte_eal.so.24.1 00:06:42.438 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:42.438 [234/268] Linking target lib/librte_meter.so.24.1 00:06:42.438 [235/268] Linking target lib/librte_pci.so.24.1 00:06:42.438 [236/268] Linking target lib/librte_ring.so.24.1 00:06:42.438 [237/268] Linking target lib/librte_timer.so.24.1 00:06:42.438 [238/268] Linking target lib/librte_dmadev.so.24.1 00:06:42.438 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:42.438 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:42.438 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:42.438 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:42.438 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:42.438 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:42.438 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:42.438 [246/268] Linking target lib/librte_rcu.so.24.1 00:06:42.695 [247/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.695 [248/268] Linking target lib/librte_mempool.so.24.1 00:06:42.695 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.695 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:42.695 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:42.695 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:42.695 [253/268] Linking target lib/librte_mbuf.so.24.1 00:06:42.953 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:42.953 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:06:42.953 [256/268] Linking target lib/librte_net.so.24.1 00:06:42.953 [257/268] Linking target lib/librte_reorder.so.24.1 00:06:42.953 [258/268] Linking target lib/librte_compressdev.so.24.1 00:06:43.212 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:43.212 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:43.212 [261/268] Linking target lib/librte_hash.so.24.1 00:06:43.212 [262/268] Linking target lib/librte_cmdline.so.24.1 00:06:43.212 [263/268] Linking target lib/librte_security.so.24.1 00:06:43.212 [264/268] Linking target lib/librte_ethdev.so.24.1 00:06:43.212 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:43.470 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:43.470 [267/268] Linking target lib/librte_power.so.24.1 00:06:43.471 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:43.471 INFO: autodetecting backend as ninja 00:06:43.471 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:15.579 CC lib/ut/ut.o 00:07:15.579 CC lib/log/log.o 00:07:15.579 CC lib/log/log_flags.o 00:07:15.579 CC lib/ut_mock/mock.o 00:07:15.579 CC lib/log/log_deprecated.o 00:07:15.579 LIB 
libspdk_ut_mock.a 00:07:15.579 LIB libspdk_ut.a 00:07:15.579 LIB libspdk_log.a 00:07:15.579 SO libspdk_ut_mock.so.6.0 00:07:15.579 SO libspdk_ut.so.2.0 00:07:15.579 SO libspdk_log.so.7.1 00:07:15.579 SYMLINK libspdk_ut_mock.so 00:07:15.579 SYMLINK libspdk_ut.so 00:07:15.579 SYMLINK libspdk_log.so 00:07:15.579 CC lib/ioat/ioat.o 00:07:15.579 CC lib/dma/dma.o 00:07:15.579 CC lib/util/base64.o 00:07:15.579 CC lib/util/bit_array.o 00:07:15.579 CC lib/util/cpuset.o 00:07:15.579 CC lib/util/crc16.o 00:07:15.579 CC lib/util/crc32.o 00:07:15.579 CC lib/util/crc32c.o 00:07:15.579 CXX lib/trace_parser/trace.o 00:07:15.579 CC lib/vfio_user/host/vfio_user_pci.o 00:07:15.579 CC lib/util/crc32_ieee.o 00:07:15.579 CC lib/util/crc64.o 00:07:15.579 CC lib/vfio_user/host/vfio_user.o 00:07:15.579 CC lib/util/dif.o 00:07:15.579 CC lib/util/fd.o 00:07:15.579 LIB libspdk_dma.a 00:07:15.579 CC lib/util/fd_group.o 00:07:15.579 SO libspdk_dma.so.5.0 00:07:15.579 LIB libspdk_ioat.a 00:07:15.579 SO libspdk_ioat.so.7.0 00:07:15.579 SYMLINK libspdk_dma.so 00:07:15.579 CC lib/util/file.o 00:07:15.579 CC lib/util/hexlify.o 00:07:15.579 CC lib/util/iov.o 00:07:15.579 SYMLINK libspdk_ioat.so 00:07:15.579 CC lib/util/math.o 00:07:15.579 CC lib/util/net.o 00:07:15.579 CC lib/util/pipe.o 00:07:15.579 LIB libspdk_vfio_user.a 00:07:15.579 SO libspdk_vfio_user.so.5.0 00:07:15.579 CC lib/util/strerror_tls.o 00:07:15.579 CC lib/util/string.o 00:07:15.579 CC lib/util/uuid.o 00:07:15.579 CC lib/util/xor.o 00:07:15.579 SYMLINK libspdk_vfio_user.so 00:07:15.579 CC lib/util/zipf.o 00:07:15.579 CC lib/util/md5.o 00:07:15.579 LIB libspdk_util.a 00:07:15.579 SO libspdk_util.so.10.1 00:07:15.579 SYMLINK libspdk_util.so 00:07:15.579 LIB libspdk_trace_parser.a 00:07:15.579 SO libspdk_trace_parser.so.6.0 00:07:15.579 SYMLINK libspdk_trace_parser.so 00:07:15.579 CC lib/idxd/idxd.o 00:07:15.579 CC lib/idxd/idxd_user.o 00:07:15.579 CC lib/rdma_utils/rdma_utils.o 00:07:15.579 CC lib/env_dpdk/env.o 00:07:15.579 CC lib/idxd/idxd_kernel.o 00:07:15.579 CC lib/env_dpdk/memory.o 00:07:15.579 CC lib/env_dpdk/pci.o 00:07:15.579 CC lib/json/json_parse.o 00:07:15.579 CC lib/conf/conf.o 00:07:15.579 CC lib/vmd/vmd.o 00:07:15.579 LIB libspdk_conf.a 00:07:15.579 CC lib/vmd/led.o 00:07:15.579 CC lib/env_dpdk/init.o 00:07:15.579 SO libspdk_conf.so.6.0 00:07:15.579 LIB libspdk_rdma_utils.a 00:07:15.579 SYMLINK libspdk_conf.so 00:07:15.579 CC lib/json/json_util.o 00:07:15.579 CC lib/json/json_write.o 00:07:15.579 SO libspdk_rdma_utils.so.1.0 00:07:15.579 CC lib/env_dpdk/threads.o 00:07:15.579 CC lib/env_dpdk/pci_ioat.o 00:07:15.579 SYMLINK libspdk_rdma_utils.so 00:07:15.579 CC lib/env_dpdk/pci_virtio.o 00:07:15.579 CC lib/env_dpdk/pci_vmd.o 00:07:15.579 CC lib/env_dpdk/pci_idxd.o 00:07:15.579 LIB libspdk_idxd.a 00:07:15.579 CC lib/env_dpdk/pci_event.o 00:07:15.579 SO libspdk_idxd.so.12.1 00:07:15.579 LIB libspdk_json.a 00:07:15.579 CC lib/env_dpdk/sigbus_handler.o 00:07:15.579 SO libspdk_json.so.6.0 00:07:15.579 CC lib/env_dpdk/pci_dpdk.o 00:07:15.579 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:15.579 SYMLINK libspdk_idxd.so 00:07:15.579 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:15.579 LIB libspdk_vmd.a 00:07:15.579 SO libspdk_vmd.so.6.0 00:07:15.579 SYMLINK libspdk_json.so 00:07:15.579 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:15.579 CC lib/rdma_provider/common.o 00:07:15.579 SYMLINK libspdk_vmd.so 00:07:15.579 CC lib/jsonrpc/jsonrpc_server.o 00:07:15.579 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:15.579 CC lib/jsonrpc/jsonrpc_client.o 00:07:15.579 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:07:15.579 LIB libspdk_rdma_provider.a 00:07:15.579 SO libspdk_rdma_provider.so.7.0 00:07:15.579 SYMLINK libspdk_rdma_provider.so 00:07:15.579 LIB libspdk_jsonrpc.a 00:07:15.579 SO libspdk_jsonrpc.so.6.0 00:07:15.579 SYMLINK libspdk_jsonrpc.so 00:07:15.579 LIB libspdk_env_dpdk.a 00:07:15.579 SO libspdk_env_dpdk.so.15.1 00:07:15.579 CC lib/rpc/rpc.o 00:07:15.579 SYMLINK libspdk_env_dpdk.so 00:07:15.579 LIB libspdk_rpc.a 00:07:15.579 SO libspdk_rpc.so.6.0 00:07:15.579 SYMLINK libspdk_rpc.so 00:07:15.579 CC lib/trace/trace_flags.o 00:07:15.579 CC lib/trace/trace_rpc.o 00:07:15.579 CC lib/trace/trace.o 00:07:15.579 CC lib/keyring/keyring.o 00:07:15.579 CC lib/keyring/keyring_rpc.o 00:07:15.579 CC lib/notify/notify.o 00:07:15.579 CC lib/notify/notify_rpc.o 00:07:15.579 LIB libspdk_notify.a 00:07:15.579 SO libspdk_notify.so.6.0 00:07:15.579 SYMLINK libspdk_notify.so 00:07:15.579 LIB libspdk_keyring.a 00:07:15.579 LIB libspdk_trace.a 00:07:15.579 SO libspdk_keyring.so.2.0 00:07:15.579 SO libspdk_trace.so.11.0 00:07:15.579 SYMLINK libspdk_keyring.so 00:07:15.579 SYMLINK libspdk_trace.so 00:07:15.579 CC lib/sock/sock.o 00:07:15.579 CC lib/sock/sock_rpc.o 00:07:15.579 CC lib/thread/iobuf.o 00:07:15.579 CC lib/thread/thread.o 00:07:16.146 LIB libspdk_sock.a 00:07:16.146 SO libspdk_sock.so.10.0 00:07:16.146 SYMLINK libspdk_sock.so 00:07:16.405 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:16.405 CC lib/nvme/nvme_ctrlr.o 00:07:16.405 CC lib/nvme/nvme_fabric.o 00:07:16.405 CC lib/nvme/nvme_ns_cmd.o 00:07:16.405 CC lib/nvme/nvme_pcie_common.o 00:07:16.405 CC lib/nvme/nvme_ns.o 00:07:16.405 CC lib/nvme/nvme_qpair.o 00:07:16.405 CC lib/nvme/nvme_pcie.o 00:07:16.405 CC lib/nvme/nvme.o 00:07:17.341 CC lib/nvme/nvme_quirks.o 00:07:17.341 LIB libspdk_thread.a 00:07:17.341 CC lib/nvme/nvme_transport.o 00:07:17.341 SO libspdk_thread.so.11.0 00:07:17.341 CC lib/nvme/nvme_discovery.o 00:07:17.341 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:17.341 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:17.341 SYMLINK libspdk_thread.so 00:07:17.341 CC lib/nvme/nvme_tcp.o 00:07:17.601 CC lib/nvme/nvme_opal.o 00:07:17.601 CC lib/nvme/nvme_io_msg.o 00:07:17.601 CC lib/nvme/nvme_poll_group.o 00:07:17.859 CC lib/nvme/nvme_zns.o 00:07:18.118 CC lib/accel/accel.o 00:07:18.377 CC lib/init/json_config.o 00:07:18.377 CC lib/blob/blobstore.o 00:07:18.377 CC lib/virtio/virtio.o 00:07:18.377 CC lib/virtio/virtio_vhost_user.o 00:07:18.377 CC lib/init/subsystem.o 00:07:18.635 CC lib/fsdev/fsdev.o 00:07:18.635 CC lib/init/subsystem_rpc.o 00:07:18.635 CC lib/init/rpc.o 00:07:18.635 CC lib/virtio/virtio_vfio_user.o 00:07:18.894 CC lib/nvme/nvme_stubs.o 00:07:18.894 CC lib/nvme/nvme_auth.o 00:07:18.894 CC lib/nvme/nvme_cuse.o 00:07:18.894 LIB libspdk_init.a 00:07:18.894 SO libspdk_init.so.6.0 00:07:18.894 CC lib/virtio/virtio_pci.o 00:07:19.153 CC lib/fsdev/fsdev_io.o 00:07:19.153 SYMLINK libspdk_init.so 00:07:19.153 CC lib/fsdev/fsdev_rpc.o 00:07:19.411 CC lib/nvme/nvme_rdma.o 00:07:19.411 LIB libspdk_virtio.a 00:07:19.411 SO libspdk_virtio.so.7.0 00:07:19.411 CC lib/accel/accel_rpc.o 00:07:19.411 CC lib/accel/accel_sw.o 00:07:19.411 SYMLINK libspdk_virtio.so 00:07:19.411 CC lib/blob/request.o 00:07:19.669 CC lib/event/app.o 00:07:19.669 CC lib/event/reactor.o 00:07:19.669 LIB libspdk_fsdev.a 00:07:19.669 SO libspdk_fsdev.so.2.0 00:07:19.670 CC lib/event/log_rpc.o 00:07:19.930 SYMLINK libspdk_fsdev.so 00:07:19.930 CC lib/blob/zeroes.o 00:07:19.930 LIB libspdk_accel.a 00:07:19.930 CC lib/blob/blob_bs_dev.o 00:07:19.930 SO 
libspdk_accel.so.16.0 00:07:19.930 CC lib/event/app_rpc.o 00:07:19.930 CC lib/event/scheduler_static.o 00:07:19.930 SYMLINK libspdk_accel.so 00:07:20.189 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:20.189 LIB libspdk_event.a 00:07:20.189 CC lib/bdev/bdev.o 00:07:20.189 CC lib/bdev/bdev_zone.o 00:07:20.189 CC lib/bdev/bdev_rpc.o 00:07:20.189 CC lib/bdev/scsi_nvme.o 00:07:20.189 CC lib/bdev/part.o 00:07:20.447 SO libspdk_event.so.14.0 00:07:20.447 SYMLINK libspdk_event.so 00:07:21.014 LIB libspdk_fuse_dispatcher.a 00:07:21.014 SO libspdk_fuse_dispatcher.so.1.0 00:07:21.014 SYMLINK libspdk_fuse_dispatcher.so 00:07:21.273 LIB libspdk_nvme.a 00:07:21.588 SO libspdk_nvme.so.15.0 00:07:21.846 SYMLINK libspdk_nvme.so 00:07:22.415 LIB libspdk_blob.a 00:07:22.415 SO libspdk_blob.so.11.0 00:07:22.673 SYMLINK libspdk_blob.so 00:07:22.933 CC lib/lvol/lvol.o 00:07:22.933 CC lib/blobfs/blobfs.o 00:07:22.933 CC lib/blobfs/tree.o 00:07:23.191 LIB libspdk_bdev.a 00:07:23.449 SO libspdk_bdev.so.17.0 00:07:23.449 SYMLINK libspdk_bdev.so 00:07:23.707 CC lib/nbd/nbd.o 00:07:23.707 CC lib/nbd/nbd_rpc.o 00:07:23.707 CC lib/scsi/port.o 00:07:23.707 CC lib/scsi/dev.o 00:07:23.707 CC lib/scsi/lun.o 00:07:23.707 CC lib/ublk/ublk.o 00:07:23.707 CC lib/ftl/ftl_core.o 00:07:23.707 CC lib/nvmf/ctrlr.o 00:07:23.707 LIB libspdk_blobfs.a 00:07:23.965 SO libspdk_blobfs.so.10.0 00:07:23.965 LIB libspdk_lvol.a 00:07:23.965 SYMLINK libspdk_blobfs.so 00:07:23.965 CC lib/ftl/ftl_init.o 00:07:23.965 SO libspdk_lvol.so.10.0 00:07:23.965 CC lib/scsi/scsi.o 00:07:23.965 CC lib/scsi/scsi_bdev.o 00:07:23.965 SYMLINK libspdk_lvol.so 00:07:23.965 CC lib/scsi/scsi_pr.o 00:07:24.225 CC lib/scsi/scsi_rpc.o 00:07:24.225 CC lib/nvmf/ctrlr_discovery.o 00:07:24.225 CC lib/ftl/ftl_layout.o 00:07:24.225 CC lib/scsi/task.o 00:07:24.225 LIB libspdk_nbd.a 00:07:24.225 CC lib/ftl/ftl_debug.o 00:07:24.225 SO libspdk_nbd.so.7.0 00:07:24.483 CC lib/nvmf/ctrlr_bdev.o 00:07:24.483 SYMLINK libspdk_nbd.so 00:07:24.483 CC lib/nvmf/subsystem.o 00:07:24.483 CC lib/nvmf/nvmf.o 00:07:24.483 CC lib/nvmf/nvmf_rpc.o 00:07:24.483 CC lib/ublk/ublk_rpc.o 00:07:24.483 CC lib/ftl/ftl_io.o 00:07:24.483 LIB libspdk_scsi.a 00:07:24.742 CC lib/ftl/ftl_sb.o 00:07:24.742 SO libspdk_scsi.so.9.0 00:07:24.742 LIB libspdk_ublk.a 00:07:24.742 CC lib/nvmf/transport.o 00:07:24.742 SYMLINK libspdk_scsi.so 00:07:24.742 CC lib/ftl/ftl_l2p.o 00:07:24.742 SO libspdk_ublk.so.3.0 00:07:24.742 SYMLINK libspdk_ublk.so 00:07:25.000 CC lib/nvmf/tcp.o 00:07:25.000 CC lib/nvmf/stubs.o 00:07:25.000 CC lib/iscsi/conn.o 00:07:25.000 CC lib/nvmf/mdns_server.o 00:07:25.259 CC lib/ftl/ftl_l2p_flat.o 00:07:25.518 CC lib/nvmf/rdma.o 00:07:25.518 CC lib/nvmf/auth.o 00:07:25.518 CC lib/ftl/ftl_nv_cache.o 00:07:25.518 CC lib/iscsi/init_grp.o 00:07:25.776 CC lib/iscsi/iscsi.o 00:07:25.776 CC lib/vhost/vhost.o 00:07:25.776 CC lib/iscsi/param.o 00:07:25.776 CC lib/iscsi/portal_grp.o 00:07:26.035 CC lib/iscsi/tgt_node.o 00:07:26.035 CC lib/iscsi/iscsi_subsystem.o 00:07:26.035 CC lib/iscsi/iscsi_rpc.o 00:07:26.292 CC lib/iscsi/task.o 00:07:26.292 CC lib/vhost/vhost_rpc.o 00:07:26.550 CC lib/vhost/vhost_scsi.o 00:07:26.550 CC lib/vhost/vhost_blk.o 00:07:26.550 CC lib/ftl/ftl_band.o 00:07:26.550 CC lib/ftl/ftl_band_ops.o 00:07:26.808 CC lib/ftl/ftl_writer.o 00:07:26.808 CC lib/ftl/ftl_rq.o 00:07:27.127 CC lib/ftl/ftl_reloc.o 00:07:27.127 CC lib/ftl/ftl_l2p_cache.o 00:07:27.127 CC lib/ftl/ftl_p2l.o 00:07:27.127 CC lib/ftl/ftl_p2l_log.o 00:07:27.127 CC lib/vhost/rte_vhost_user.o 00:07:27.127 CC 
lib/ftl/mngt/ftl_mngt.o 00:07:27.386 LIB libspdk_iscsi.a 00:07:27.386 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:27.644 SO libspdk_iscsi.so.8.0 00:07:27.644 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:27.644 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:27.644 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:27.644 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:27.644 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:27.902 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:27.902 SYMLINK libspdk_iscsi.so 00:07:27.902 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:27.902 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:27.902 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:27.902 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:28.160 LIB libspdk_nvmf.a 00:07:28.160 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:28.160 CC lib/ftl/utils/ftl_conf.o 00:07:28.160 CC lib/ftl/utils/ftl_md.o 00:07:28.160 CC lib/ftl/utils/ftl_mempool.o 00:07:28.160 SO libspdk_nvmf.so.20.0 00:07:28.419 CC lib/ftl/utils/ftl_bitmap.o 00:07:28.419 CC lib/ftl/utils/ftl_property.o 00:07:28.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:28.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:28.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:28.419 SYMLINK libspdk_nvmf.so 00:07:28.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:28.677 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:28.677 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:28.677 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:28.677 LIB libspdk_vhost.a 00:07:28.677 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:28.677 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:28.677 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:28.677 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:28.677 SO libspdk_vhost.so.8.0 00:07:28.935 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:28.935 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:28.935 SYMLINK libspdk_vhost.so 00:07:28.935 CC lib/ftl/base/ftl_base_dev.o 00:07:28.935 CC lib/ftl/base/ftl_base_bdev.o 00:07:28.935 CC lib/ftl/ftl_trace.o 00:07:29.501 LIB libspdk_ftl.a 00:07:29.758 SO libspdk_ftl.so.9.0 00:07:30.016 SYMLINK libspdk_ftl.so 00:07:30.591 CC module/env_dpdk/env_dpdk_rpc.o 00:07:30.591 CC module/scheduler/gscheduler/gscheduler.o 00:07:30.591 CC module/blob/bdev/blob_bdev.o 00:07:30.591 CC module/keyring/file/keyring.o 00:07:30.591 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:30.591 CC module/accel/ioat/accel_ioat.o 00:07:30.591 CC module/sock/posix/posix.o 00:07:30.591 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:30.591 CC module/fsdev/aio/fsdev_aio.o 00:07:30.591 CC module/accel/error/accel_error.o 00:07:30.591 LIB libspdk_env_dpdk_rpc.a 00:07:30.849 SO libspdk_env_dpdk_rpc.so.6.0 00:07:30.849 LIB libspdk_scheduler_gscheduler.a 00:07:30.849 SO libspdk_scheduler_gscheduler.so.4.0 00:07:30.849 CC module/keyring/file/keyring_rpc.o 00:07:30.849 SYMLINK libspdk_env_dpdk_rpc.so 00:07:30.849 CC module/accel/ioat/accel_ioat_rpc.o 00:07:30.849 CC module/accel/error/accel_error_rpc.o 00:07:30.849 LIB libspdk_scheduler_dpdk_governor.a 00:07:30.849 LIB libspdk_scheduler_dynamic.a 00:07:30.849 SYMLINK libspdk_scheduler_gscheduler.so 00:07:30.849 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:30.849 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:30.849 SO libspdk_scheduler_dynamic.so.4.0 00:07:30.849 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:30.849 SYMLINK libspdk_scheduler_dynamic.so 00:07:31.107 LIB libspdk_accel_ioat.a 00:07:31.107 LIB libspdk_keyring_file.a 00:07:31.107 LIB libspdk_blob_bdev.a 00:07:31.107 SO libspdk_accel_ioat.so.6.0 00:07:31.107 CC module/fsdev/aio/linux_aio_mgr.o 00:07:31.107 LIB libspdk_accel_error.a 00:07:31.107 SO 
libspdk_keyring_file.so.2.0 00:07:31.107 SO libspdk_blob_bdev.so.11.0 00:07:31.107 SO libspdk_accel_error.so.2.0 00:07:31.107 SYMLINK libspdk_keyring_file.so 00:07:31.107 SYMLINK libspdk_accel_ioat.so 00:07:31.107 CC module/accel/dsa/accel_dsa.o 00:07:31.107 SYMLINK libspdk_blob_bdev.so 00:07:31.107 CC module/accel/iaa/accel_iaa.o 00:07:31.107 CC module/sock/uring/uring.o 00:07:31.107 SYMLINK libspdk_accel_error.so 00:07:31.107 CC module/accel/dsa/accel_dsa_rpc.o 00:07:31.365 CC module/keyring/linux/keyring.o 00:07:31.365 CC module/accel/iaa/accel_iaa_rpc.o 00:07:31.365 LIB libspdk_fsdev_aio.a 00:07:31.624 SO libspdk_fsdev_aio.so.1.0 00:07:31.624 CC module/blobfs/bdev/blobfs_bdev.o 00:07:31.624 CC module/bdev/delay/vbdev_delay.o 00:07:31.624 LIB libspdk_accel_iaa.a 00:07:31.624 SYMLINK libspdk_fsdev_aio.so 00:07:31.624 CC module/bdev/error/vbdev_error.o 00:07:31.624 LIB libspdk_accel_dsa.a 00:07:31.624 LIB libspdk_sock_posix.a 00:07:31.624 SO libspdk_accel_iaa.so.3.0 00:07:31.624 SO libspdk_sock_posix.so.6.0 00:07:31.883 SO libspdk_accel_dsa.so.5.0 00:07:31.883 CC module/keyring/linux/keyring_rpc.o 00:07:31.883 CC module/bdev/gpt/gpt.o 00:07:31.883 SYMLINK libspdk_accel_iaa.so 00:07:31.883 CC module/bdev/gpt/vbdev_gpt.o 00:07:31.883 SYMLINK libspdk_accel_dsa.so 00:07:31.883 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:31.883 SYMLINK libspdk_sock_posix.so 00:07:31.883 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:31.883 CC module/bdev/error/vbdev_error_rpc.o 00:07:31.883 CC module/bdev/lvol/vbdev_lvol.o 00:07:31.883 LIB libspdk_keyring_linux.a 00:07:32.142 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:32.142 SO libspdk_keyring_linux.so.1.0 00:07:32.142 LIB libspdk_blobfs_bdev.a 00:07:32.142 LIB libspdk_bdev_delay.a 00:07:32.142 SYMLINK libspdk_keyring_linux.so 00:07:32.142 SO libspdk_blobfs_bdev.so.6.0 00:07:32.142 SO libspdk_bdev_delay.so.6.0 00:07:32.142 LIB libspdk_bdev_error.a 00:07:32.142 SYMLINK libspdk_blobfs_bdev.so 00:07:32.142 SYMLINK libspdk_bdev_delay.so 00:07:32.142 SO libspdk_bdev_error.so.6.0 00:07:32.142 CC module/bdev/malloc/bdev_malloc.o 00:07:32.400 LIB libspdk_bdev_gpt.a 00:07:32.400 SO libspdk_bdev_gpt.so.6.0 00:07:32.400 SYMLINK libspdk_bdev_error.so 00:07:32.400 CC module/bdev/null/bdev_null.o 00:07:32.400 CC module/bdev/nvme/bdev_nvme.o 00:07:32.400 SYMLINK libspdk_bdev_gpt.so 00:07:32.400 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:32.400 CC module/bdev/passthru/vbdev_passthru.o 00:07:32.400 CC module/bdev/raid/bdev_raid.o 00:07:32.400 LIB libspdk_sock_uring.a 00:07:32.400 SO libspdk_sock_uring.so.5.0 00:07:32.659 SYMLINK libspdk_sock_uring.so 00:07:32.660 CC module/bdev/raid/bdev_raid_rpc.o 00:07:32.660 CC module/bdev/split/vbdev_split.o 00:07:32.660 CC module/bdev/raid/bdev_raid_sb.o 00:07:32.660 LIB libspdk_bdev_lvol.a 00:07:32.660 SO libspdk_bdev_lvol.so.6.0 00:07:32.660 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:32.930 SYMLINK libspdk_bdev_lvol.so 00:07:32.930 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:32.930 CC module/bdev/raid/raid0.o 00:07:32.930 CC module/bdev/split/vbdev_split_rpc.o 00:07:32.930 CC module/bdev/null/bdev_null_rpc.o 00:07:32.930 LIB libspdk_bdev_malloc.a 00:07:32.930 SO libspdk_bdev_malloc.so.6.0 00:07:32.930 LIB libspdk_bdev_passthru.a 00:07:32.930 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:33.189 LIB libspdk_bdev_split.a 00:07:33.189 LIB libspdk_bdev_null.a 00:07:33.189 SO libspdk_bdev_passthru.so.6.0 00:07:33.189 SO libspdk_bdev_split.so.6.0 00:07:33.189 SO libspdk_bdev_null.so.6.0 00:07:33.189 SYMLINK libspdk_bdev_malloc.so 
00:07:33.189 CC module/bdev/raid/raid1.o 00:07:33.189 SYMLINK libspdk_bdev_passthru.so 00:07:33.189 CC module/bdev/uring/bdev_uring.o 00:07:33.189 SYMLINK libspdk_bdev_split.so 00:07:33.189 SYMLINK libspdk_bdev_null.so 00:07:33.189 CC module/bdev/raid/concat.o 00:07:33.448 CC module/bdev/aio/bdev_aio.o 00:07:33.448 CC module/bdev/ftl/bdev_ftl.o 00:07:33.448 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:33.448 CC module/bdev/iscsi/bdev_iscsi.o 00:07:33.448 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:33.448 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:33.448 CC module/bdev/aio/bdev_aio_rpc.o 00:07:33.706 CC module/bdev/uring/bdev_uring_rpc.o 00:07:33.706 LIB libspdk_bdev_zone_block.a 00:07:33.706 SO libspdk_bdev_zone_block.so.6.0 00:07:33.706 CC module/bdev/nvme/nvme_rpc.o 00:07:33.706 SYMLINK libspdk_bdev_zone_block.so 00:07:33.706 CC module/bdev/nvme/bdev_mdns_client.o 00:07:33.706 CC module/bdev/nvme/vbdev_opal.o 00:07:33.706 LIB libspdk_bdev_iscsi.a 00:07:33.964 LIB libspdk_bdev_aio.a 00:07:33.964 SO libspdk_bdev_iscsi.so.6.0 00:07:33.964 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:33.964 LIB libspdk_bdev_raid.a 00:07:33.964 SO libspdk_bdev_aio.so.6.0 00:07:33.964 LIB libspdk_bdev_ftl.a 00:07:33.964 LIB libspdk_bdev_uring.a 00:07:33.964 SO libspdk_bdev_ftl.so.6.0 00:07:33.964 SYMLINK libspdk_bdev_iscsi.so 00:07:33.964 SO libspdk_bdev_raid.so.6.0 00:07:33.964 SO libspdk_bdev_uring.so.6.0 00:07:33.964 SYMLINK libspdk_bdev_aio.so 00:07:33.964 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:33.964 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:33.964 SYMLINK libspdk_bdev_ftl.so 00:07:33.964 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:33.964 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:34.223 SYMLINK libspdk_bdev_raid.so 00:07:34.223 SYMLINK libspdk_bdev_uring.so 00:07:34.789 LIB libspdk_bdev_virtio.a 00:07:34.789 SO libspdk_bdev_virtio.so.6.0 00:07:34.789 SYMLINK libspdk_bdev_virtio.so 00:07:35.725 LIB libspdk_bdev_nvme.a 00:07:35.984 SO libspdk_bdev_nvme.so.7.1 00:07:35.984 SYMLINK libspdk_bdev_nvme.so 00:07:36.551 CC module/event/subsystems/fsdev/fsdev.o 00:07:36.551 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:36.551 CC module/event/subsystems/sock/sock.o 00:07:36.551 CC module/event/subsystems/iobuf/iobuf.o 00:07:36.551 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:36.551 CC module/event/subsystems/scheduler/scheduler.o 00:07:36.551 CC module/event/subsystems/keyring/keyring.o 00:07:36.551 CC module/event/subsystems/vmd/vmd.o 00:07:36.551 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:36.551 LIB libspdk_event_keyring.a 00:07:36.551 LIB libspdk_event_fsdev.a 00:07:36.551 LIB libspdk_event_scheduler.a 00:07:36.810 SO libspdk_event_fsdev.so.1.0 00:07:36.810 SO libspdk_event_keyring.so.1.0 00:07:36.810 LIB libspdk_event_sock.a 00:07:36.810 SO libspdk_event_scheduler.so.4.0 00:07:36.810 LIB libspdk_event_vhost_blk.a 00:07:36.810 LIB libspdk_event_iobuf.a 00:07:36.810 SO libspdk_event_sock.so.5.0 00:07:36.810 SYMLINK libspdk_event_fsdev.so 00:07:36.810 SYMLINK libspdk_event_keyring.so 00:07:36.810 LIB libspdk_event_vmd.a 00:07:36.810 SYMLINK libspdk_event_scheduler.so 00:07:36.810 SO libspdk_event_vhost_blk.so.3.0 00:07:36.810 SO libspdk_event_iobuf.so.3.0 00:07:36.810 SYMLINK libspdk_event_sock.so 00:07:36.810 SO libspdk_event_vmd.so.6.0 00:07:36.810 SYMLINK libspdk_event_iobuf.so 00:07:36.810 SYMLINK libspdk_event_vhost_blk.so 00:07:36.810 SYMLINK libspdk_event_vmd.so 00:07:37.070 CC module/event/subsystems/accel/accel.o 00:07:37.329 LIB libspdk_event_accel.a 00:07:37.329 
SO libspdk_event_accel.so.6.0 00:07:37.329 SYMLINK libspdk_event_accel.so 00:07:37.588 CC module/event/subsystems/bdev/bdev.o 00:07:37.846 LIB libspdk_event_bdev.a 00:07:37.846 SO libspdk_event_bdev.so.6.0 00:07:37.846 SYMLINK libspdk_event_bdev.so 00:07:38.104 CC module/event/subsystems/ublk/ublk.o 00:07:38.104 CC module/event/subsystems/scsi/scsi.o 00:07:38.104 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:38.104 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:38.104 CC module/event/subsystems/nbd/nbd.o 00:07:38.362 LIB libspdk_event_scsi.a 00:07:38.362 LIB libspdk_event_nbd.a 00:07:38.362 LIB libspdk_event_ublk.a 00:07:38.362 SO libspdk_event_scsi.so.6.0 00:07:38.362 SO libspdk_event_nbd.so.6.0 00:07:38.362 SO libspdk_event_ublk.so.3.0 00:07:38.362 SYMLINK libspdk_event_scsi.so 00:07:38.362 SYMLINK libspdk_event_nbd.so 00:07:38.362 SYMLINK libspdk_event_ublk.so 00:07:38.362 LIB libspdk_event_nvmf.a 00:07:38.621 SO libspdk_event_nvmf.so.6.0 00:07:38.621 SYMLINK libspdk_event_nvmf.so 00:07:38.621 CC module/event/subsystems/iscsi/iscsi.o 00:07:38.621 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:38.880 LIB libspdk_event_iscsi.a 00:07:38.880 LIB libspdk_event_vhost_scsi.a 00:07:38.880 SO libspdk_event_iscsi.so.6.0 00:07:38.880 SO libspdk_event_vhost_scsi.so.3.0 00:07:38.880 SYMLINK libspdk_event_iscsi.so 00:07:38.880 SYMLINK libspdk_event_vhost_scsi.so 00:07:39.138 SO libspdk.so.6.0 00:07:39.138 SYMLINK libspdk.so 00:07:39.396 CXX app/trace/trace.o 00:07:39.396 CC app/trace_record/trace_record.o 00:07:39.396 CC app/spdk_nvme_identify/identify.o 00:07:39.396 CC app/spdk_lspci/spdk_lspci.o 00:07:39.396 CC app/spdk_nvme_perf/perf.o 00:07:39.396 CC app/nvmf_tgt/nvmf_main.o 00:07:39.396 CC app/iscsi_tgt/iscsi_tgt.o 00:07:39.396 CC app/spdk_tgt/spdk_tgt.o 00:07:39.396 CC test/thread/poller_perf/poller_perf.o 00:07:39.396 CC examples/util/zipf/zipf.o 00:07:39.654 LINK spdk_lspci 00:07:39.654 LINK spdk_trace_record 00:07:39.654 LINK zipf 00:07:39.654 LINK nvmf_tgt 00:07:39.654 LINK spdk_trace 00:07:39.654 LINK poller_perf 00:07:39.654 LINK iscsi_tgt 00:07:39.912 LINK spdk_tgt 00:07:40.170 TEST_HEADER include/spdk/accel.h 00:07:40.170 TEST_HEADER include/spdk/accel_module.h 00:07:40.170 TEST_HEADER include/spdk/assert.h 00:07:40.170 TEST_HEADER include/spdk/barrier.h 00:07:40.170 TEST_HEADER include/spdk/base64.h 00:07:40.170 TEST_HEADER include/spdk/bdev.h 00:07:40.170 TEST_HEADER include/spdk/bdev_module.h 00:07:40.170 TEST_HEADER include/spdk/bdev_zone.h 00:07:40.170 TEST_HEADER include/spdk/bit_array.h 00:07:40.170 TEST_HEADER include/spdk/bit_pool.h 00:07:40.170 TEST_HEADER include/spdk/blob_bdev.h 00:07:40.170 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:40.170 TEST_HEADER include/spdk/blobfs.h 00:07:40.170 TEST_HEADER include/spdk/blob.h 00:07:40.170 TEST_HEADER include/spdk/conf.h 00:07:40.170 TEST_HEADER include/spdk/config.h 00:07:40.170 TEST_HEADER include/spdk/cpuset.h 00:07:40.170 TEST_HEADER include/spdk/crc16.h 00:07:40.170 TEST_HEADER include/spdk/crc32.h 00:07:40.170 TEST_HEADER include/spdk/crc64.h 00:07:40.170 CC test/dma/test_dma/test_dma.o 00:07:40.170 TEST_HEADER include/spdk/dif.h 00:07:40.170 TEST_HEADER include/spdk/dma.h 00:07:40.170 TEST_HEADER include/spdk/endian.h 00:07:40.170 TEST_HEADER include/spdk/env_dpdk.h 00:07:40.170 TEST_HEADER include/spdk/env.h 00:07:40.170 TEST_HEADER include/spdk/event.h 00:07:40.170 TEST_HEADER include/spdk/fd_group.h 00:07:40.170 TEST_HEADER include/spdk/fd.h 00:07:40.170 TEST_HEADER include/spdk/file.h 00:07:40.170 
TEST_HEADER include/spdk/fsdev.h 00:07:40.170 TEST_HEADER include/spdk/fsdev_module.h 00:07:40.170 TEST_HEADER include/spdk/ftl.h 00:07:40.170 CC examples/ioat/perf/perf.o 00:07:40.170 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:40.170 TEST_HEADER include/spdk/gpt_spec.h 00:07:40.170 TEST_HEADER include/spdk/hexlify.h 00:07:40.170 TEST_HEADER include/spdk/histogram_data.h 00:07:40.170 TEST_HEADER include/spdk/idxd.h 00:07:40.170 TEST_HEADER include/spdk/idxd_spec.h 00:07:40.170 TEST_HEADER include/spdk/init.h 00:07:40.170 TEST_HEADER include/spdk/ioat.h 00:07:40.170 TEST_HEADER include/spdk/ioat_spec.h 00:07:40.170 TEST_HEADER include/spdk/iscsi_spec.h 00:07:40.170 TEST_HEADER include/spdk/json.h 00:07:40.170 TEST_HEADER include/spdk/jsonrpc.h 00:07:40.170 TEST_HEADER include/spdk/keyring.h 00:07:40.170 CC test/app/bdev_svc/bdev_svc.o 00:07:40.170 TEST_HEADER include/spdk/keyring_module.h 00:07:40.170 TEST_HEADER include/spdk/likely.h 00:07:40.170 TEST_HEADER include/spdk/log.h 00:07:40.171 CC test/rpc_client/rpc_client_test.o 00:07:40.171 TEST_HEADER include/spdk/lvol.h 00:07:40.171 TEST_HEADER include/spdk/md5.h 00:07:40.171 TEST_HEADER include/spdk/memory.h 00:07:40.171 TEST_HEADER include/spdk/mmio.h 00:07:40.171 TEST_HEADER include/spdk/nbd.h 00:07:40.171 TEST_HEADER include/spdk/net.h 00:07:40.171 TEST_HEADER include/spdk/notify.h 00:07:40.171 TEST_HEADER include/spdk/nvme.h 00:07:40.171 TEST_HEADER include/spdk/nvme_intel.h 00:07:40.171 LINK spdk_nvme_identify 00:07:40.171 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:40.171 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:40.171 TEST_HEADER include/spdk/nvme_spec.h 00:07:40.171 TEST_HEADER include/spdk/nvme_zns.h 00:07:40.171 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:40.171 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:40.171 CC app/spdk_nvme_discover/discovery_aer.o 00:07:40.171 TEST_HEADER include/spdk/nvmf.h 00:07:40.171 TEST_HEADER include/spdk/nvmf_spec.h 00:07:40.171 TEST_HEADER include/spdk/nvmf_transport.h 00:07:40.171 TEST_HEADER include/spdk/opal.h 00:07:40.171 TEST_HEADER include/spdk/opal_spec.h 00:07:40.171 TEST_HEADER include/spdk/pci_ids.h 00:07:40.171 TEST_HEADER include/spdk/pipe.h 00:07:40.171 TEST_HEADER include/spdk/queue.h 00:07:40.171 TEST_HEADER include/spdk/reduce.h 00:07:40.171 TEST_HEADER include/spdk/rpc.h 00:07:40.171 TEST_HEADER include/spdk/scheduler.h 00:07:40.171 TEST_HEADER include/spdk/scsi.h 00:07:40.171 TEST_HEADER include/spdk/scsi_spec.h 00:07:40.429 TEST_HEADER include/spdk/sock.h 00:07:40.429 TEST_HEADER include/spdk/stdinc.h 00:07:40.429 TEST_HEADER include/spdk/string.h 00:07:40.429 TEST_HEADER include/spdk/thread.h 00:07:40.429 TEST_HEADER include/spdk/trace.h 00:07:40.429 TEST_HEADER include/spdk/trace_parser.h 00:07:40.429 TEST_HEADER include/spdk/tree.h 00:07:40.429 CC test/event/event_perf/event_perf.o 00:07:40.429 TEST_HEADER include/spdk/ublk.h 00:07:40.429 TEST_HEADER include/spdk/util.h 00:07:40.429 TEST_HEADER include/spdk/uuid.h 00:07:40.429 TEST_HEADER include/spdk/version.h 00:07:40.429 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:40.429 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:40.429 TEST_HEADER include/spdk/vhost.h 00:07:40.429 TEST_HEADER include/spdk/vmd.h 00:07:40.429 TEST_HEADER include/spdk/xor.h 00:07:40.429 TEST_HEADER include/spdk/zipf.h 00:07:40.429 CXX test/cpp_headers/accel.o 00:07:40.429 CC test/env/mem_callbacks/mem_callbacks.o 00:07:40.429 LINK bdev_svc 00:07:40.429 LINK event_perf 00:07:40.429 LINK rpc_client_test 00:07:40.429 LINK 
spdk_nvme_discover 00:07:40.429 LINK ioat_perf 00:07:40.687 CXX test/cpp_headers/accel_module.o 00:07:40.687 CC app/spdk_top/spdk_top.o 00:07:40.687 LINK spdk_nvme_perf 00:07:40.687 LINK test_dma 00:07:40.687 CC test/event/reactor/reactor.o 00:07:40.946 CC examples/ioat/verify/verify.o 00:07:40.946 CXX test/cpp_headers/assert.o 00:07:40.946 CC test/app/histogram_perf/histogram_perf.o 00:07:40.946 CC test/app/jsoncat/jsoncat.o 00:07:40.946 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:40.946 LINK reactor 00:07:41.205 CC test/app/stub/stub.o 00:07:41.205 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:41.205 LINK histogram_perf 00:07:41.205 LINK jsoncat 00:07:41.205 LINK mem_callbacks 00:07:41.205 CXX test/cpp_headers/barrier.o 00:07:41.205 LINK verify 00:07:41.463 LINK stub 00:07:41.463 CXX test/cpp_headers/base64.o 00:07:41.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:41.463 CC test/env/vtophys/vtophys.o 00:07:41.463 CC test/event/reactor_perf/reactor_perf.o 00:07:41.463 LINK nvme_fuzz 00:07:41.463 CC test/event/app_repeat/app_repeat.o 00:07:41.721 LINK vtophys 00:07:41.721 CXX test/cpp_headers/bdev.o 00:07:41.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:41.721 CC examples/vmd/lsvmd/lsvmd.o 00:07:41.721 LINK reactor_perf 00:07:41.721 LINK app_repeat 00:07:41.721 CC examples/vmd/led/led.o 00:07:41.979 LINK lsvmd 00:07:41.979 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:41.979 CC test/env/memory/memory_ut.o 00:07:41.979 CC test/env/pci/pci_ut.o 00:07:41.979 CXX test/cpp_headers/bdev_module.o 00:07:41.979 CXX test/cpp_headers/bdev_zone.o 00:07:41.979 LINK spdk_top 00:07:41.979 LINK env_dpdk_post_init 00:07:42.238 LINK led 00:07:42.238 CC test/event/scheduler/scheduler.o 00:07:42.238 CXX test/cpp_headers/bit_array.o 00:07:42.238 LINK vhost_fuzz 00:07:42.496 CC app/vhost/vhost.o 00:07:42.496 CXX test/cpp_headers/bit_pool.o 00:07:42.496 CXX test/cpp_headers/blob_bdev.o 00:07:42.496 CC test/accel/dif/dif.o 00:07:42.496 CC examples/idxd/perf/perf.o 00:07:42.754 LINK scheduler 00:07:42.754 LINK pci_ut 00:07:42.754 LINK vhost 00:07:42.754 CC test/blobfs/mkfs/mkfs.o 00:07:42.754 CC app/spdk_dd/spdk_dd.o 00:07:43.013 CXX test/cpp_headers/blobfs_bdev.o 00:07:43.013 LINK mkfs 00:07:43.013 LINK idxd_perf 00:07:43.272 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:43.272 CXX test/cpp_headers/blobfs.o 00:07:43.272 CC app/fio/nvme/fio_plugin.o 00:07:43.272 LINK iscsi_fuzz 00:07:43.272 LINK dif 00:07:43.272 LINK memory_ut 00:07:43.272 CC examples/thread/thread/thread_ex.o 00:07:43.272 CC app/fio/bdev/fio_plugin.o 00:07:43.272 LINK spdk_dd 00:07:43.530 LINK interrupt_tgt 00:07:43.530 CC examples/sock/hello_world/hello_sock.o 00:07:43.530 CXX test/cpp_headers/blob.o 00:07:43.530 CXX test/cpp_headers/conf.o 00:07:43.530 LINK thread 00:07:43.805 CXX test/cpp_headers/config.o 00:07:43.805 LINK hello_sock 00:07:43.805 CC test/nvme/aer/aer.o 00:07:43.805 CC test/nvme/reset/reset.o 00:07:43.805 CC test/lvol/esnap/esnap.o 00:07:43.805 CC test/nvme/sgl/sgl.o 00:07:44.072 CC test/bdev/bdevio/bdevio.o 00:07:44.072 CXX test/cpp_headers/cpuset.o 00:07:44.072 LINK spdk_bdev 00:07:44.072 LINK spdk_nvme 00:07:44.072 CC examples/accel/perf/accel_perf.o 00:07:44.330 LINK sgl 00:07:44.330 LINK aer 00:07:44.330 CC examples/blob/hello_world/hello_blob.o 00:07:44.330 CXX test/cpp_headers/crc16.o 00:07:44.330 CC examples/blob/cli/blobcli.o 00:07:44.330 CC test/nvme/e2edp/nvme_dp.o 00:07:44.330 LINK reset 00:07:44.589 CC test/nvme/overhead/overhead.o 00:07:44.589 LINK bdevio 00:07:44.589 CXX 
test/cpp_headers/crc32.o 00:07:44.589 CC test/nvme/err_injection/err_injection.o 00:07:44.589 LINK accel_perf 00:07:44.589 LINK hello_blob 00:07:44.589 LINK nvme_dp 00:07:44.868 CXX test/cpp_headers/crc64.o 00:07:44.868 CC test/nvme/startup/startup.o 00:07:44.868 CC test/nvme/reserve/reserve.o 00:07:44.868 CXX test/cpp_headers/dif.o 00:07:44.868 LINK blobcli 00:07:44.868 LINK err_injection 00:07:44.868 LINK overhead 00:07:44.868 LINK startup 00:07:45.127 CC test/nvme/simple_copy/simple_copy.o 00:07:45.127 CC test/nvme/connect_stress/connect_stress.o 00:07:45.127 CC test/nvme/boot_partition/boot_partition.o 00:07:45.127 LINK reserve 00:07:45.127 CXX test/cpp_headers/dma.o 00:07:45.127 CXX test/cpp_headers/endian.o 00:07:45.386 CC test/nvme/compliance/nvme_compliance.o 00:07:45.386 CC test/nvme/fused_ordering/fused_ordering.o 00:07:45.386 LINK connect_stress 00:07:45.386 LINK boot_partition 00:07:45.386 CC examples/nvme/hello_world/hello_world.o 00:07:45.386 CXX test/cpp_headers/env_dpdk.o 00:07:45.386 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:45.386 LINK simple_copy 00:07:45.386 CC test/nvme/fdp/fdp.o 00:07:45.644 CXX test/cpp_headers/env.o 00:07:45.644 LINK nvme_compliance 00:07:45.644 LINK fused_ordering 00:07:45.644 CXX test/cpp_headers/event.o 00:07:45.904 CC test/nvme/cuse/cuse.o 00:07:45.904 CC examples/nvme/reconnect/reconnect.o 00:07:45.904 LINK doorbell_aers 00:07:45.904 LINK hello_world 00:07:45.904 LINK fdp 00:07:45.904 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:46.163 CXX test/cpp_headers/fd_group.o 00:07:46.163 CXX test/cpp_headers/fd.o 00:07:46.163 CXX test/cpp_headers/file.o 00:07:46.163 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:46.163 CC examples/nvme/arbitration/arbitration.o 00:07:46.421 CXX test/cpp_headers/fsdev.o 00:07:46.421 LINK hello_fsdev 00:07:46.421 LINK reconnect 00:07:46.421 CC examples/bdev/hello_world/hello_bdev.o 00:07:46.421 CXX test/cpp_headers/fsdev_module.o 00:07:46.421 CC examples/bdev/bdevperf/bdevperf.o 00:07:46.421 LINK arbitration 00:07:46.681 CXX test/cpp_headers/ftl.o 00:07:46.681 CC examples/nvme/hotplug/hotplug.o 00:07:46.681 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:46.681 CC examples/nvme/abort/abort.o 00:07:46.940 LINK hello_bdev 00:07:46.940 CXX test/cpp_headers/fuse_dispatcher.o 00:07:46.940 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:46.940 LINK hotplug 00:07:46.940 LINK nvme_manage 00:07:46.940 LINK cmb_copy 00:07:47.198 CXX test/cpp_headers/gpt_spec.o 00:07:47.198 CXX test/cpp_headers/hexlify.o 00:07:47.198 CXX test/cpp_headers/histogram_data.o 00:07:47.198 CXX test/cpp_headers/idxd.o 00:07:47.198 CXX test/cpp_headers/idxd_spec.o 00:07:47.198 LINK pmr_persistence 00:07:47.198 CXX test/cpp_headers/init.o 00:07:47.457 CXX test/cpp_headers/ioat.o 00:07:47.457 CXX test/cpp_headers/ioat_spec.o 00:07:47.457 CXX test/cpp_headers/iscsi_spec.o 00:07:47.457 CXX test/cpp_headers/json.o 00:07:47.457 LINK bdevperf 00:07:47.457 LINK abort 00:07:47.457 CXX test/cpp_headers/jsonrpc.o 00:07:47.716 CXX test/cpp_headers/keyring.o 00:07:47.716 CXX test/cpp_headers/keyring_module.o 00:07:47.716 CXX test/cpp_headers/likely.o 00:07:47.716 CXX test/cpp_headers/log.o 00:07:47.716 CXX test/cpp_headers/lvol.o 00:07:47.716 CXX test/cpp_headers/md5.o 00:07:47.716 LINK cuse 00:07:47.716 CXX test/cpp_headers/memory.o 00:07:47.716 CXX test/cpp_headers/mmio.o 00:07:48.012 CXX test/cpp_headers/nbd.o 00:07:48.012 CXX test/cpp_headers/net.o 00:07:48.012 CXX test/cpp_headers/notify.o 00:07:48.012 CXX test/cpp_headers/nvme.o 00:07:48.012 CXX 
test/cpp_headers/nvme_intel.o 00:07:48.012 CXX test/cpp_headers/nvme_ocssd.o 00:07:48.012 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:48.012 CXX test/cpp_headers/nvme_spec.o 00:07:48.012 CXX test/cpp_headers/nvme_zns.o 00:07:48.012 CXX test/cpp_headers/nvmf_cmd.o 00:07:48.289 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:48.289 CC examples/nvmf/nvmf/nvmf.o 00:07:48.289 CXX test/cpp_headers/nvmf.o 00:07:48.289 CXX test/cpp_headers/nvmf_spec.o 00:07:48.289 CXX test/cpp_headers/nvmf_transport.o 00:07:48.289 CXX test/cpp_headers/opal.o 00:07:48.289 CXX test/cpp_headers/opal_spec.o 00:07:48.289 CXX test/cpp_headers/pci_ids.o 00:07:48.289 CXX test/cpp_headers/pipe.o 00:07:48.547 CXX test/cpp_headers/queue.o 00:07:48.548 CXX test/cpp_headers/reduce.o 00:07:48.548 CXX test/cpp_headers/rpc.o 00:07:48.548 CXX test/cpp_headers/scheduler.o 00:07:48.548 CXX test/cpp_headers/scsi.o 00:07:48.548 CXX test/cpp_headers/scsi_spec.o 00:07:48.548 CXX test/cpp_headers/sock.o 00:07:48.548 CXX test/cpp_headers/stdinc.o 00:07:48.548 LINK nvmf 00:07:48.806 CXX test/cpp_headers/string.o 00:07:48.806 CXX test/cpp_headers/thread.o 00:07:48.806 CXX test/cpp_headers/trace.o 00:07:48.806 CXX test/cpp_headers/trace_parser.o 00:07:48.806 CXX test/cpp_headers/tree.o 00:07:48.806 CXX test/cpp_headers/ublk.o 00:07:48.806 CXX test/cpp_headers/util.o 00:07:48.806 CXX test/cpp_headers/uuid.o 00:07:48.806 CXX test/cpp_headers/version.o 00:07:48.806 CXX test/cpp_headers/vfio_user_pci.o 00:07:48.806 CXX test/cpp_headers/vfio_user_spec.o 00:07:48.806 CXX test/cpp_headers/vhost.o 00:07:49.065 CXX test/cpp_headers/vmd.o 00:07:49.065 CXX test/cpp_headers/xor.o 00:07:49.065 CXX test/cpp_headers/zipf.o 00:07:50.002 LINK esnap 00:07:50.260 00:07:50.260 real 1m38.932s 00:07:50.260 user 9m18.354s 00:07:50.260 sys 1m44.095s 00:07:50.260 05:18:04 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:07:50.260 05:18:04 make -- common/autotest_common.sh@10 -- $ set +x 00:07:50.260 ************************************ 00:07:50.260 END TEST make 00:07:50.260 ************************************ 00:07:50.260 05:18:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:50.260 05:18:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:50.260 05:18:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:50.260 05:18:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.260 05:18:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:50.260 05:18:04 -- pm/common@44 -- $ pid=5300 00:07:50.260 05:18:04 -- pm/common@50 -- $ kill -TERM 5300 00:07:50.260 05:18:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.260 05:18:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:50.260 05:18:04 -- pm/common@44 -- $ pid=5302 00:07:50.260 05:18:04 -- pm/common@50 -- $ kill -TERM 5302 00:07:50.261 05:18:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:50.261 05:18:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:50.261 05:18:04 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:50.261 05:18:04 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:50.261 05:18:04 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:50.261 05:18:04 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:50.261 05:18:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
00:07:50.261 05:18:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.261 05:18:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.261 05:18:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.261 05:18:04 -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.261 05:18:04 -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.261 05:18:04 -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.261 05:18:04 -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.261 05:18:04 -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.261 05:18:04 -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.520 05:18:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.520 05:18:04 -- scripts/common.sh@344 -- # case "$op" in 00:07:50.520 05:18:04 -- scripts/common.sh@345 -- # : 1 00:07:50.520 05:18:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.520 05:18:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.520 05:18:04 -- scripts/common.sh@365 -- # decimal 1 00:07:50.520 05:18:04 -- scripts/common.sh@353 -- # local d=1 00:07:50.520 05:18:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.520 05:18:04 -- scripts/common.sh@355 -- # echo 1 00:07:50.520 05:18:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.520 05:18:04 -- scripts/common.sh@366 -- # decimal 2 00:07:50.520 05:18:04 -- scripts/common.sh@353 -- # local d=2 00:07:50.521 05:18:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.521 05:18:04 -- scripts/common.sh@355 -- # echo 2 00:07:50.521 05:18:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.521 05:18:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.521 05:18:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.521 05:18:04 -- scripts/common.sh@368 -- # return 0 00:07:50.521 05:18:04 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.521 05:18:04 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:50.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.521 --rc genhtml_branch_coverage=1 00:07:50.521 --rc genhtml_function_coverage=1 00:07:50.521 --rc genhtml_legend=1 00:07:50.521 --rc geninfo_all_blocks=1 00:07:50.521 --rc geninfo_unexecuted_blocks=1 00:07:50.521 00:07:50.521 ' 00:07:50.521 05:18:04 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:50.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.521 --rc genhtml_branch_coverage=1 00:07:50.521 --rc genhtml_function_coverage=1 00:07:50.521 --rc genhtml_legend=1 00:07:50.521 --rc geninfo_all_blocks=1 00:07:50.521 --rc geninfo_unexecuted_blocks=1 00:07:50.521 00:07:50.521 ' 00:07:50.521 05:18:04 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:50.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.521 --rc genhtml_branch_coverage=1 00:07:50.521 --rc genhtml_function_coverage=1 00:07:50.521 --rc genhtml_legend=1 00:07:50.521 --rc geninfo_all_blocks=1 00:07:50.521 --rc geninfo_unexecuted_blocks=1 00:07:50.521 00:07:50.521 ' 00:07:50.521 05:18:04 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:50.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.521 --rc genhtml_branch_coverage=1 00:07:50.521 --rc genhtml_function_coverage=1 00:07:50.521 --rc genhtml_legend=1 00:07:50.521 --rc geninfo_all_blocks=1 00:07:50.521 --rc geninfo_unexecuted_blocks=1 00:07:50.521 00:07:50.521 ' 00:07:50.521 05:18:04 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.521 05:18:04 -- nvmf/common.sh@7 -- # uname -s 00:07:50.521 05:18:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.521 05:18:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.521 05:18:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.521 05:18:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.521 05:18:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.521 05:18:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.521 05:18:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.521 05:18:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.521 05:18:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.521 05:18:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.521 05:18:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:07:50.521 05:18:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:07:50.521 05:18:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.521 05:18:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.521 05:18:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.521 05:18:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.521 05:18:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.521 05:18:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.521 05:18:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.521 05:18:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.521 05:18:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.521 05:18:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.521 05:18:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.521 05:18:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.521 05:18:04 -- paths/export.sh@5 -- # export PATH 00:07:50.521 05:18:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.521 05:18:04 -- nvmf/common.sh@51 -- # : 0 00:07:50.521 05:18:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.521 05:18:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.521 05:18:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.521 05:18:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.521 05:18:04 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.521 05:18:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.521 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.521 05:18:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.521 05:18:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.521 05:18:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.521 05:18:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:50.521 05:18:04 -- spdk/autotest.sh@32 -- # uname -s 00:07:50.521 05:18:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:50.521 05:18:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:50.521 05:18:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:50.521 05:18:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:50.521 05:18:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:50.521 05:18:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:50.521 05:18:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:50.521 05:18:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:50.521 05:18:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:50.521 05:18:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54505 00:07:50.521 05:18:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:50.521 05:18:04 -- pm/common@17 -- # local monitor 00:07:50.521 05:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.521 05:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.521 05:18:04 -- pm/common@25 -- # sleep 1 00:07:50.521 05:18:04 -- pm/common@21 -- # date +%s 00:07:50.521 05:18:04 -- pm/common@21 -- # date +%s 00:07:50.521 05:18:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732079884 00:07:50.521 05:18:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732079884 00:07:50.521 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732079884_collect-cpu-load.pm.log 00:07:50.521 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732079884_collect-vmstat.pm.log 00:07:51.456 05:18:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:51.456 05:18:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:51.456 05:18:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.456 05:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.456 05:18:05 -- spdk/autotest.sh@59 -- # create_test_list 00:07:51.456 05:18:05 -- common/autotest_common.sh@750 -- # xtrace_disable 00:07:51.456 05:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.456 05:18:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:51.456 05:18:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:51.456 05:18:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:51.456 05:18:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:51.456 05:18:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:51.456 05:18:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
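The xtrace above boils down to a dotted-version comparison: the output of lcov --version is split on dots, compared field by field against 2, and the branch/function coverage flags are only kept when the installed lcov is older than 2.x. A minimal bash sketch of that check follows; it is an illustration, not lifted from the SPDK scripts, version_lt is a hypothetical stand-in for the lt/cmp_versions pair, and it assumes purely numeric version fields.

version_lt() {
    local -a a b
    local i max x y
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0    # first differing field decides
        (( x > y )) && return 1
    done
    return 1                       # equal versions are not "less than"
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

With lcov 1.15 installed, as in this run, the comparison succeeds on the first field (1 < 2) and the coverage options shown in the trace are exported.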
00:07:51.456 05:18:05 -- common/autotest_common.sh@1455 -- # uname 00:07:51.456 05:18:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:51.456 05:18:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:51.456 05:18:05 -- common/autotest_common.sh@1475 -- # uname 00:07:51.456 05:18:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:51.456 05:18:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:51.456 05:18:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:51.715 lcov: LCOV version 1.15 00:07:51.715 05:18:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:13.654 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:13.654 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:52.362 05:19:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:52.362 05:19:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.362 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:08:52.362 05:19:03 -- spdk/autotest.sh@78 -- # rm -f 00:08:52.362 05:19:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:52.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.362 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:52.362 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:52.362 05:19:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:52.362 05:19:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:52.362 05:19:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:52.362 05:19:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:52.362 05:19:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:52.362 05:19:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:52.362 05:19:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:52.362 05:19:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:52.362 05:19:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:52.362 05:19:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:52.362 05:19:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:52.362 05:19:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:08:52.362 05:19:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:08:52.362 05:19:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1651 -- 
# [[ none != none ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:52.362 05:19:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:08:52.362 05:19:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:08:52.362 05:19:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:52.362 05:19:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:52.362 05:19:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:52.362 05:19:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:52.362 05:19:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:52.362 05:19:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:52.362 05:19:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:52.362 05:19:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:52.362 No valid GPT data, bailing 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # pt= 00:08:52.362 05:19:04 -- scripts/common.sh@395 -- # return 1 00:08:52.362 05:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:52.362 1+0 records in 00:08:52.362 1+0 records out 00:08:52.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378212 s, 277 MB/s 00:08:52.362 05:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:52.362 05:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:52.362 05:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:52.362 05:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:52.362 05:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:52.362 No valid GPT data, bailing 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # pt= 00:08:52.362 05:19:04 -- scripts/common.sh@395 -- # return 1 00:08:52.362 05:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:52.362 1+0 records in 00:08:52.362 1+0 records out 00:08:52.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365361 s, 287 MB/s 00:08:52.362 05:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:52.362 05:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:52.362 05:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:52.362 05:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:52.362 05:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:52.362 No valid GPT data, bailing 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # pt= 00:08:52.362 05:19:04 -- scripts/common.sh@395 -- # return 1 00:08:52.362 05:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:52.362 1+0 records in 00:08:52.362 1+0 records out 00:08:52.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552405 s, 190 MB/s 00:08:52.362 05:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:52.362 05:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:52.362 05:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:52.362 05:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:52.362 05:19:04 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:52.362 No valid GPT data, bailing 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:52.362 05:19:04 -- scripts/common.sh@394 -- # pt= 00:08:52.362 05:19:04 -- scripts/common.sh@395 -- # return 1 00:08:52.362 05:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:52.362 1+0 records in 00:08:52.362 1+0 records out 00:08:52.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444735 s, 236 MB/s 00:08:52.362 05:19:04 -- spdk/autotest.sh@105 -- # sync 00:08:52.362 05:19:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:52.362 05:19:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:52.362 05:19:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:52.928 05:19:07 -- spdk/autotest.sh@111 -- # uname -s 00:08:52.928 05:19:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:52.928 05:19:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:52.928 05:19:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:53.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.899 Hugepages 00:08:53.899 node hugesize free / total 00:08:53.899 node0 1048576kB 0 / 0 00:08:53.899 node0 2048kB 0 / 0 00:08:53.899 00:08:53.899 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:54.157 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:54.157 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:54.416 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:54.416 05:19:08 -- spdk/autotest.sh@117 -- # uname -s 00:08:54.416 05:19:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:54.416 05:19:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:54.416 05:19:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:55.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.792 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.792 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:56.051 05:19:10 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:56.986 05:19:11 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:56.986 05:19:11 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:56.986 05:19:11 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:56.986 05:19:11 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:56.986 05:19:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:56.986 05:19:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:56.986 05:19:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.986 05:19:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.986 05:19:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:56.986 05:19:11 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:08:56.987 05:19:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:56.987 05:19:11 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:57.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
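Before any tests run, the trace above walks every whole /dev/nvme*n* namespace, skips zoned devices, and zeroes the first 1 MiB of anything that carries no recognizable partition table. A condensed sketch of that cleanup loop follows; it is an illustration rather than the actual script, and it calls blkid directly where the scripts first go through spdk-gpt.py.

for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue                  # whole namespaces only, no partitions
    name=$(basename "$dev")
    zoned=/sys/block/$name/queue/zoned
    if [[ -e $zoned && $(<"$zoned") != none ]]; then
        continue                                   # leave zoned namespaces alone
    fi
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # no partition table: scrub the first MiB
    fi
done

The four "1+0 records in / 1+0 records out" blocks in the log are this wipe running once per namespace (nvme0n1, nvme1n1, nvme1n2, nvme1n3).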
00:08:57.245 Waiting for block devices as requested 00:08:57.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.761 05:19:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:57.761 05:19:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:57.761 05:19:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:08:57.761 05:19:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:57.761 05:19:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:08:57.761 05:19:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:08:57.761 05:19:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:57.761 05:19:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:57.762 05:19:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:57.762 05:19:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:57.762 05:19:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:57.762 05:19:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:57.762 05:19:12 -- common/autotest_common.sh@1541 -- # continue 00:08:57.762 05:19:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:57.762 05:19:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:57.762 05:19:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:08:57.762 05:19:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:57.762 05:19:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:57.762 05:19:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:57.762 05:19:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:57.762 05:19:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:57.762 05:19:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:57.762 05:19:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:57.762 05:19:12 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:57.762 05:19:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:57.762 05:19:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:57.762 05:19:12 -- common/autotest_common.sh@1541 -- # continue 00:08:57.762 05:19:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:57.762 05:19:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.762 05:19:12 -- common/autotest_common.sh@10 -- # set +x 00:08:57.762 05:19:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:57.762 05:19:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.762 05:19:12 -- common/autotest_common.sh@10 -- # set +x 00:08:57.762 05:19:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:58.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.629 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.629 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.629 05:19:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:58.629 05:19:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.629 05:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.629 05:19:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:58.629 05:19:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:58.629 05:19:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:58.629 05:19:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:58.629 05:19:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:58.630 05:19:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:58.630 05:19:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:58.630 05:19:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:58.630 05:19:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:58.630 05:19:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:58.630 05:19:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.630 05:19:13 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.630 05:19:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:58.630 05:19:13 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:08:58.630 05:19:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:58.630 05:19:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:58.630 05:19:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:58.630 05:19:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:58.630 05:19:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:58.630 05:19:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:58.630 05:19:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:58.630 05:19:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:58.630 05:19:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:58.630 05:19:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:58.630 05:19:13 -- common/autotest_common.sh@1570 -- # return 0 
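The nvme_namespace_revert trace above maps each PCI address back to its /dev/nvme controller node through sysfs and then reads OACS with nvme id-ctrl to confirm the namespace-management capability (bit 0x08) before deciding whether a revert is needed. A compact sketch of that lookup, with bdf as a placeholder value taken from this run:

bdf=0000:00:10.0                                   # placeholder address from the log
sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$sysfs")                    # resolves to /dev/nvme1 in this run

oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    echo "$ctrlr supports namespace management"    # oacs 0x12a in the log -> bit is set
fi

In the log both controllers report oacs 0x12a, so the namespace-management bit is set, and unvmcap of 0 lets the script continue without reverting anything.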
00:08:58.630 05:19:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:58.630 05:19:13 -- common/autotest_common.sh@1578 -- # return 0 00:08:58.630 05:19:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:58.630 05:19:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:58.630 05:19:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:58.630 05:19:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:58.630 05:19:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:58.630 05:19:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.630 05:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.630 05:19:13 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:08:58.630 05:19:13 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:08:58.630 05:19:13 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:08:58.630 05:19:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:58.630 05:19:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.630 05:19:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.630 05:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.630 ************************************ 00:08:58.630 START TEST env 00:08:58.630 ************************************ 00:08:58.630 05:19:13 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:58.889 * Looking for test storage... 00:08:58.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:58.889 05:19:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.889 05:19:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.889 05:19:13 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.148 05:19:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.148 05:19:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.148 05:19:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.148 05:19:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.148 05:19:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.148 05:19:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.148 05:19:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.148 05:19:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.148 05:19:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.148 05:19:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.148 05:19:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.148 05:19:13 env -- scripts/common.sh@344 -- # case "$op" in 00:08:59.148 05:19:13 env -- scripts/common.sh@345 -- # : 1 00:08:59.148 05:19:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.148 05:19:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.148 05:19:13 env -- scripts/common.sh@365 -- # decimal 1 00:08:59.148 05:19:13 env -- scripts/common.sh@353 -- # local d=1 00:08:59.148 05:19:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.148 05:19:13 env -- scripts/common.sh@355 -- # echo 1 00:08:59.148 05:19:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.148 05:19:13 env -- scripts/common.sh@366 -- # decimal 2 00:08:59.148 05:19:13 env -- scripts/common.sh@353 -- # local d=2 00:08:59.148 05:19:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.148 05:19:13 env -- scripts/common.sh@355 -- # echo 2 00:08:59.148 05:19:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.148 05:19:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.148 05:19:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.148 05:19:13 env -- scripts/common.sh@368 -- # return 0 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.148 --rc genhtml_branch_coverage=1 00:08:59.148 --rc genhtml_function_coverage=1 00:08:59.148 --rc genhtml_legend=1 00:08:59.148 --rc geninfo_all_blocks=1 00:08:59.148 --rc geninfo_unexecuted_blocks=1 00:08:59.148 00:08:59.148 ' 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.148 --rc genhtml_branch_coverage=1 00:08:59.148 --rc genhtml_function_coverage=1 00:08:59.148 --rc genhtml_legend=1 00:08:59.148 --rc geninfo_all_blocks=1 00:08:59.148 --rc geninfo_unexecuted_blocks=1 00:08:59.148 00:08:59.148 ' 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.148 --rc genhtml_branch_coverage=1 00:08:59.148 --rc genhtml_function_coverage=1 00:08:59.148 --rc genhtml_legend=1 00:08:59.148 --rc geninfo_all_blocks=1 00:08:59.148 --rc geninfo_unexecuted_blocks=1 00:08:59.148 00:08:59.148 ' 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.148 --rc genhtml_branch_coverage=1 00:08:59.148 --rc genhtml_function_coverage=1 00:08:59.148 --rc genhtml_legend=1 00:08:59.148 --rc geninfo_all_blocks=1 00:08:59.148 --rc geninfo_unexecuted_blocks=1 00:08:59.148 00:08:59.148 ' 00:08:59.148 05:19:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.148 05:19:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.148 05:19:13 env -- common/autotest_common.sh@10 -- # set +x 00:08:59.148 ************************************ 00:08:59.148 START TEST env_memory 00:08:59.148 ************************************ 00:08:59.148 05:19:13 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:59.148 00:08:59.148 00:08:59.148 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.148 http://cunit.sourceforge.net/ 00:08:59.148 00:08:59.148 00:08:59.148 Suite: memory 00:08:59.148 Test: alloc and free memory map ...[2024-11-20 05:19:13.570503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:59.148 passed 00:08:59.148 Test: mem map translation ...[2024-11-20 05:19:13.611259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:59.148 [2024-11-20 05:19:13.613659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:59.148 [2024-11-20 05:19:13.614392] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:59.148 [2024-11-20 05:19:13.614602] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:59.407 passed 00:08:59.407 Test: mem map registration ...[2024-11-20 05:19:13.686948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:59.407 [2024-11-20 05:19:13.687627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:59.408 passed 00:08:59.408 Test: mem map adjacent registrations ...passed 00:08:59.408 00:08:59.408 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.408 suites 1 1 n/a 0 0 00:08:59.408 tests 4 4 4 0 0 00:08:59.408 asserts 152 152 152 0 n/a 00:08:59.408 00:08:59.408 Elapsed time = 0.187 seconds 00:08:59.408 ************************************ 00:08:59.408 END TEST env_memory 00:08:59.408 ************************************ 00:08:59.408 00:08:59.408 real 0m0.273s 00:08:59.408 user 0m0.186s 00:08:59.408 sys 0m0.013s 00:08:59.408 05:19:13 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.408 05:19:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:59.666 05:19:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:59.666 05:19:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.666 05:19:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.666 05:19:14 env -- common/autotest_common.sh@10 -- # set +x 00:08:59.666 ************************************ 00:08:59.666 START TEST env_vtophys 00:08:59.666 ************************************ 00:08:59.666 05:19:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:59.666 EAL: lib.eal log level changed from notice to debug 00:08:59.666 EAL: Detected lcore 0 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 1 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 2 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 3 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 4 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 5 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 6 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 7 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 8 as core 0 on socket 0 00:08:59.666 EAL: Detected lcore 9 as core 0 on socket 0 00:08:59.666 EAL: Maximum logical cores by configuration: 128 00:08:59.666 EAL: Detected CPU lcores: 10 00:08:59.666 EAL: Detected NUMA nodes: 1 00:08:59.666 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:59.666 EAL: Detected shared linkage of DPDK 00:08:59.666 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:59.666 EAL: Selected IOVA mode 'PA' 00:08:59.666 EAL: Probing VFIO support... 00:08:59.666 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:59.666 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:59.666 EAL: Ask a virtual area of 0x2e000 bytes 00:08:59.666 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:59.925 EAL: Setting up physically contiguous memory... 00:08:59.925 EAL: Setting maximum number of open files to 524288 00:08:59.925 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:59.925 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:59.925 EAL: Ask a virtual area of 0x61000 bytes 00:08:59.925 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:59.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:59.925 EAL: Ask a virtual area of 0x400000000 bytes 00:08:59.925 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:59.925 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:59.925 EAL: Ask a virtual area of 0x61000 bytes 00:08:59.925 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:59.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:59.925 EAL: Ask a virtual area of 0x400000000 bytes 00:08:59.925 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:59.925 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:59.925 EAL: Ask a virtual area of 0x61000 bytes 00:08:59.925 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:59.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:59.925 EAL: Ask a virtual area of 0x400000000 bytes 00:08:59.925 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:59.925 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:59.925 EAL: Ask a virtual area of 0x61000 bytes 00:08:59.925 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:59.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:59.925 EAL: Ask a virtual area of 0x400000000 bytes 00:08:59.925 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:59.925 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:59.925 EAL: Hugepages will be freed exactly as allocated. 00:08:59.925 EAL: No shared files mode enabled, IPC is disabled 00:08:59.925 EAL: No shared files mode enabled, IPC is disabled 00:08:59.925 EAL: TSC frequency is ~2200000 KHz 00:08:59.925 EAL: Main lcore 0 is ready (tid=7f779161ba00;cpuset=[0]) 00:08:59.925 EAL: Trying to obtain current memory policy. 00:08:59.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.925 EAL: Restoring previous memory policy: 0 00:08:59.925 EAL: request: mp_malloc_sync 00:08:59.925 EAL: No shared files mode enabled, IPC is disabled 00:08:59.925 EAL: Heap on socket 0 was expanded by 2MB 00:08:59.925 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:59.925 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:59.925 EAL: Mem event callback 'spdk:(nil)' registered 00:08:59.925 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:59.925 00:08:59.925 00:08:59.925 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.925 http://cunit.sourceforge.net/ 00:08:59.925 00:08:59.925 00:08:59.925 Suite: components_suite 00:08:59.925 Test: vtophys_malloc_test ...passed 00:08:59.925 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:59.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.925 EAL: Restoring previous memory policy: 4 00:08:59.925 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 4MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 4MB 00:08:59.926 EAL: Trying to obtain current memory policy. 00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 6MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 6MB 00:08:59.926 EAL: Trying to obtain current memory policy. 00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 10MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 10MB 00:08:59.926 EAL: Trying to obtain current memory policy. 00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 18MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 18MB 00:08:59.926 EAL: Trying to obtain current memory policy. 00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 34MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 34MB 00:08:59.926 EAL: Trying to obtain current memory policy. 
00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 66MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was shrunk by 66MB 00:08:59.926 EAL: Trying to obtain current memory policy. 00:08:59.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.926 EAL: Restoring previous memory policy: 4 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.926 EAL: request: mp_malloc_sync 00:08:59.926 EAL: No shared files mode enabled, IPC is disabled 00:08:59.926 EAL: Heap on socket 0 was expanded by 130MB 00:08:59.926 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.186 EAL: request: mp_malloc_sync 00:09:00.186 EAL: No shared files mode enabled, IPC is disabled 00:09:00.186 EAL: Heap on socket 0 was shrunk by 130MB 00:09:00.186 EAL: Trying to obtain current memory policy. 00:09:00.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.186 EAL: Restoring previous memory policy: 4 00:09:00.186 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.186 EAL: request: mp_malloc_sync 00:09:00.186 EAL: No shared files mode enabled, IPC is disabled 00:09:00.186 EAL: Heap on socket 0 was expanded by 258MB 00:09:00.186 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.186 EAL: request: mp_malloc_sync 00:09:00.186 EAL: No shared files mode enabled, IPC is disabled 00:09:00.186 EAL: Heap on socket 0 was shrunk by 258MB 00:09:00.186 EAL: Trying to obtain current memory policy. 00:09:00.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.186 EAL: Restoring previous memory policy: 4 00:09:00.186 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.186 EAL: request: mp_malloc_sync 00:09:00.186 EAL: No shared files mode enabled, IPC is disabled 00:09:00.186 EAL: Heap on socket 0 was expanded by 514MB 00:09:00.186 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.445 EAL: request: mp_malloc_sync 00:09:00.445 EAL: No shared files mode enabled, IPC is disabled 00:09:00.445 EAL: Heap on socket 0 was shrunk by 514MB 00:09:00.445 EAL: Trying to obtain current memory policy. 
00:09:00.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.445 EAL: Restoring previous memory policy: 4 00:09:00.445 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.446 EAL: request: mp_malloc_sync 00:09:00.446 EAL: No shared files mode enabled, IPC is disabled 00:09:00.446 EAL: Heap on socket 0 was expanded by 1026MB 00:09:00.704 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.704 EAL: request: mp_malloc_sync 00:09:00.704 EAL: No shared files mode enabled, IPC is disabled 00:09:00.704 passedEAL: Heap on socket 0 was shrunk by 1026MB 00:09:00.704 00:09:00.704 00:09:00.704 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.704 suites 1 1 n/a 0 0 00:09:00.704 tests 2 2 2 0 0 00:09:00.704 asserts 5491 5491 5491 0 n/a 00:09:00.704 00:09:00.704 Elapsed time = 0.791 seconds 00:09:00.704 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.704 EAL: request: mp_malloc_sync 00:09:00.704 EAL: No shared files mode enabled, IPC is disabled 00:09:00.704 EAL: Heap on socket 0 was shrunk by 2MB 00:09:00.704 EAL: No shared files mode enabled, IPC is disabled 00:09:00.704 EAL: No shared files mode enabled, IPC is disabled 00:09:00.704 EAL: No shared files mode enabled, IPC is disabled 00:09:00.704 ************************************ 00:09:00.704 END TEST env_vtophys 00:09:00.704 ************************************ 00:09:00.704 00:09:00.704 real 0m1.093s 00:09:00.704 user 0m0.503s 00:09:00.704 sys 0m0.399s 00:09:00.704 05:19:15 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.704 05:19:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 05:19:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:00.963 05:19:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.963 05:19:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.963 05:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 ************************************ 00:09:00.963 START TEST env_pci 00:09:00.963 ************************************ 00:09:00.963 05:19:15 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:00.963 00:09:00.963 00:09:00.963 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.963 http://cunit.sourceforge.net/ 00:09:00.963 00:09:00.963 00:09:00.963 Suite: pci 00:09:00.963 Test: pci_hook ...[2024-11-20 05:19:15.304617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57028 has claimed it 00:09:00.963 passed 00:09:00.963 00:09:00.963 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.963 suites 1 1 n/a 0 0 00:09:00.963 tests 1 1 1 0 0 00:09:00.963 asserts 25 25 25 0 n/a 00:09:00.963 00:09:00.963 Elapsed time = 0.003 seconds 00:09:00.963 EAL: Cannot find device (10000:00:01.0) 00:09:00.963 EAL: Failed to attach device on primary process 00:09:00.963 00:09:00.963 real 0m0.023s 00:09:00.963 user 0m0.009s 00:09:00.963 sys 0m0.013s 00:09:00.963 05:19:15 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.963 ************************************ 00:09:00.963 END TEST env_pci 00:09:00.963 ************************************ 00:09:00.963 05:19:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 05:19:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:00.963 05:19:15 env -- env/env.sh@15 -- # uname 00:09:00.963 05:19:15 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:00.963 05:19:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:00.963 05:19:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:00.963 05:19:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:00.963 05:19:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.963 05:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 ************************************ 00:09:00.963 START TEST env_dpdk_post_init 00:09:00.963 ************************************ 00:09:00.963 05:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:00.963 EAL: Detected CPU lcores: 10 00:09:00.963 EAL: Detected NUMA nodes: 1 00:09:00.963 EAL: Detected shared linkage of DPDK 00:09:00.963 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:00.963 EAL: Selected IOVA mode 'PA' 00:09:01.221 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:01.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:01.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:01.221 Starting DPDK initialization... 00:09:01.221 Starting SPDK post initialization... 00:09:01.221 SPDK NVMe probe 00:09:01.221 Attaching to 0000:00:10.0 00:09:01.221 Attaching to 0000:00:11.0 00:09:01.221 Attached to 0000:00:10.0 00:09:01.221 Attached to 0000:00:11.0 00:09:01.221 Cleaning up... 00:09:01.221 00:09:01.221 real 0m0.242s 00:09:01.221 user 0m0.056s 00:09:01.222 sys 0m0.062s 00:09:01.222 05:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.222 ************************************ 00:09:01.222 END TEST env_dpdk_post_init 00:09:01.222 ************************************ 00:09:01.222 05:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:01.484 05:19:15 env -- env/env.sh@26 -- # uname 00:09:01.484 05:19:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:01.484 05:19:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:01.484 05:19:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:01.484 05:19:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.484 05:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:09:01.484 ************************************ 00:09:01.484 START TEST env_mem_callbacks 00:09:01.484 ************************************ 00:09:01.484 05:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:01.484 EAL: Detected CPU lcores: 10 00:09:01.484 EAL: Detected NUMA nodes: 1 00:09:01.484 EAL: Detected shared linkage of DPDK 00:09:01.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:01.484 EAL: Selected IOVA mode 'PA' 00:09:01.484 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:01.484 00:09:01.484 00:09:01.484 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.484 http://cunit.sourceforge.net/ 00:09:01.484 00:09:01.484 00:09:01.484 Suite: memory 00:09:01.484 Test: test ... 
00:09:01.484 register 0x200000200000 2097152 00:09:01.484 malloc 3145728 00:09:01.484 register 0x200000400000 4194304 00:09:01.484 buf 0x200000500000 len 3145728 PASSED 00:09:01.484 malloc 64 00:09:01.484 buf 0x2000004fff40 len 64 PASSED 00:09:01.484 malloc 4194304 00:09:01.484 register 0x200000800000 6291456 00:09:01.484 buf 0x200000a00000 len 4194304 PASSED 00:09:01.484 free 0x200000500000 3145728 00:09:01.484 free 0x2000004fff40 64 00:09:01.484 unregister 0x200000400000 4194304 PASSED 00:09:01.484 free 0x200000a00000 4194304 00:09:01.484 unregister 0x200000800000 6291456 PASSED 00:09:01.484 malloc 8388608 00:09:01.484 register 0x200000400000 10485760 00:09:01.484 buf 0x200000600000 len 8388608 PASSED 00:09:01.484 free 0x200000600000 8388608 00:09:01.484 unregister 0x200000400000 10485760 PASSED 00:09:01.484 passed 00:09:01.484 00:09:01.484 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.484 suites 1 1 n/a 0 0 00:09:01.484 tests 1 1 1 0 0 00:09:01.484 asserts 15 15 15 0 n/a 00:09:01.484 00:09:01.484 Elapsed time = 0.006 seconds 00:09:01.484 00:09:01.484 real 0m0.157s 00:09:01.484 user 0m0.016s 00:09:01.484 sys 0m0.027s 00:09:01.484 05:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.484 ************************************ 00:09:01.484 END TEST env_mem_callbacks 00:09:01.484 ************************************ 00:09:01.484 05:19:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:01.742 00:09:01.742 real 0m2.905s 00:09:01.742 user 0m0.976s 00:09:01.742 sys 0m0.822s 00:09:01.742 05:19:16 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.742 ************************************ 00:09:01.742 END TEST env 00:09:01.742 ************************************ 00:09:01.742 05:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 05:19:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:01.743 05:19:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:01.743 05:19:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.743 05:19:16 -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 ************************************ 00:09:01.743 START TEST rpc 00:09:01.743 ************************************ 00:09:01.743 05:19:16 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:01.743 * Looking for test storage... 
00:09:01.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:01.743 05:19:16 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.743 05:19:16 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.743 05:19:16 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.002 05:19:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.002 05:19:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.002 05:19:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.002 05:19:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.002 05:19:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.002 05:19:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:02.002 05:19:16 rpc -- scripts/common.sh@345 -- # : 1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.002 05:19:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.002 05:19:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@353 -- # local d=1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.002 05:19:16 rpc -- scripts/common.sh@355 -- # echo 1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.002 05:19:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@353 -- # local d=2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.002 05:19:16 rpc -- scripts/common.sh@355 -- # echo 2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.002 05:19:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.002 05:19:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.002 05:19:16 rpc -- scripts/common.sh@368 -- # return 0 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.002 --rc genhtml_branch_coverage=1 00:09:02.002 --rc genhtml_function_coverage=1 00:09:02.002 --rc genhtml_legend=1 00:09:02.002 --rc geninfo_all_blocks=1 00:09:02.002 --rc geninfo_unexecuted_blocks=1 00:09:02.002 00:09:02.002 ' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.002 --rc genhtml_branch_coverage=1 00:09:02.002 --rc genhtml_function_coverage=1 00:09:02.002 --rc genhtml_legend=1 00:09:02.002 --rc geninfo_all_blocks=1 00:09:02.002 --rc geninfo_unexecuted_blocks=1 00:09:02.002 00:09:02.002 ' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.002 --rc genhtml_branch_coverage=1 00:09:02.002 --rc genhtml_function_coverage=1 00:09:02.002 --rc 
genhtml_legend=1 00:09:02.002 --rc geninfo_all_blocks=1 00:09:02.002 --rc geninfo_unexecuted_blocks=1 00:09:02.002 00:09:02.002 ' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.002 --rc genhtml_branch_coverage=1 00:09:02.002 --rc genhtml_function_coverage=1 00:09:02.002 --rc genhtml_legend=1 00:09:02.002 --rc geninfo_all_blocks=1 00:09:02.002 --rc geninfo_unexecuted_blocks=1 00:09:02.002 00:09:02.002 ' 00:09:02.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.002 05:19:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57151 00:09:02.002 05:19:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.002 05:19:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57151 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@833 -- # '[' -z 57151 ']' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.002 05:19:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.002 05:19:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:02.280 [2024-11-20 05:19:16.646301] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:02.280 [2024-11-20 05:19:16.646440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57151 ] 00:09:02.545 [2024-11-20 05:19:16.837897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.545 [2024-11-20 05:19:16.888404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:02.545 [2024-11-20 05:19:16.888490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57151' to capture a snapshot of events at runtime. 00:09:02.545 [2024-11-20 05:19:16.888509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.546 [2024-11-20 05:19:16.888522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.546 [2024-11-20 05:19:16.888534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57151 for offline analysis/debug. 
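The rpc_integrity steps that follow drive the freshly started spdk_tgt (pid 57151) through its JSON-RPC socket. Outside the harness, the same sequence is roughly what you would get by calling scripts/rpc.py by hand against the default /var/tmp/spdk.sock socket; a minimal sketch, assuming the repository layout shown in this run:

    # create an 8 MiB malloc bdev with 512-byte blocks (16384 blocks, as listed below)
    scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev on top of it, then list and count the bdevs
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0

The harness's rpc_cmd helper drives the same rpc.py interface, so the JSON documents captured below are what these calls return.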
00:09:02.546 [2024-11-20 05:19:16.889007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.546 [2024-11-20 05:19:16.942384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.804 05:19:17 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.804 05:19:17 rpc -- common/autotest_common.sh@866 -- # return 0 00:09:02.804 05:19:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:02.804 05:19:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:02.804 05:19:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:02.804 05:19:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:02.804 05:19:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:02.804 05:19:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.804 05:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 ************************************ 00:09:02.804 START TEST rpc_integrity 00:09:02.804 ************************************ 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:02.804 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.804 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:02.804 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:02.804 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:02.804 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.804 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.063 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:03.063 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:03.063 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.063 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.063 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:03.063 { 00:09:03.063 "name": "Malloc0", 00:09:03.063 "aliases": [ 00:09:03.063 "41004840-4da0-43d3-a666-31eafc8c9800" 00:09:03.063 ], 00:09:03.063 "product_name": "Malloc disk", 00:09:03.063 "block_size": 512, 00:09:03.063 "num_blocks": 16384, 00:09:03.063 "uuid": "41004840-4da0-43d3-a666-31eafc8c9800", 00:09:03.063 "assigned_rate_limits": { 00:09:03.063 "rw_ios_per_sec": 0, 00:09:03.063 "rw_mbytes_per_sec": 0, 00:09:03.063 "r_mbytes_per_sec": 0, 00:09:03.063 "w_mbytes_per_sec": 0 00:09:03.063 }, 00:09:03.063 "claimed": false, 00:09:03.063 "zoned": false, 00:09:03.063 
"supported_io_types": { 00:09:03.063 "read": true, 00:09:03.063 "write": true, 00:09:03.063 "unmap": true, 00:09:03.063 "flush": true, 00:09:03.063 "reset": true, 00:09:03.063 "nvme_admin": false, 00:09:03.063 "nvme_io": false, 00:09:03.063 "nvme_io_md": false, 00:09:03.063 "write_zeroes": true, 00:09:03.063 "zcopy": true, 00:09:03.063 "get_zone_info": false, 00:09:03.063 "zone_management": false, 00:09:03.063 "zone_append": false, 00:09:03.063 "compare": false, 00:09:03.063 "compare_and_write": false, 00:09:03.063 "abort": true, 00:09:03.063 "seek_hole": false, 00:09:03.063 "seek_data": false, 00:09:03.063 "copy": true, 00:09:03.063 "nvme_iov_md": false 00:09:03.063 }, 00:09:03.063 "memory_domains": [ 00:09:03.063 { 00:09:03.063 "dma_device_id": "system", 00:09:03.063 "dma_device_type": 1 00:09:03.063 }, 00:09:03.063 { 00:09:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.064 "dma_device_type": 2 00:09:03.064 } 00:09:03.064 ], 00:09:03.064 "driver_specific": {} 00:09:03.064 } 00:09:03.064 ]' 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.064 [2024-11-20 05:19:17.434463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:03.064 [2024-11-20 05:19:17.434559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.064 [2024-11-20 05:19:17.434591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7b0050 00:09:03.064 [2024-11-20 05:19:17.434605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.064 [2024-11-20 05:19:17.436656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.064 [2024-11-20 05:19:17.436713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:03.064 Passthru0 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:03.064 { 00:09:03.064 "name": "Malloc0", 00:09:03.064 "aliases": [ 00:09:03.064 "41004840-4da0-43d3-a666-31eafc8c9800" 00:09:03.064 ], 00:09:03.064 "product_name": "Malloc disk", 00:09:03.064 "block_size": 512, 00:09:03.064 "num_blocks": 16384, 00:09:03.064 "uuid": "41004840-4da0-43d3-a666-31eafc8c9800", 00:09:03.064 "assigned_rate_limits": { 00:09:03.064 "rw_ios_per_sec": 0, 00:09:03.064 "rw_mbytes_per_sec": 0, 00:09:03.064 "r_mbytes_per_sec": 0, 00:09:03.064 "w_mbytes_per_sec": 0 00:09:03.064 }, 00:09:03.064 "claimed": true, 00:09:03.064 "claim_type": "exclusive_write", 00:09:03.064 "zoned": false, 00:09:03.064 "supported_io_types": { 00:09:03.064 "read": true, 00:09:03.064 "write": true, 00:09:03.064 "unmap": true, 00:09:03.064 "flush": true, 00:09:03.064 "reset": true, 00:09:03.064 "nvme_admin": false, 
00:09:03.064 "nvme_io": false, 00:09:03.064 "nvme_io_md": false, 00:09:03.064 "write_zeroes": true, 00:09:03.064 "zcopy": true, 00:09:03.064 "get_zone_info": false, 00:09:03.064 "zone_management": false, 00:09:03.064 "zone_append": false, 00:09:03.064 "compare": false, 00:09:03.064 "compare_and_write": false, 00:09:03.064 "abort": true, 00:09:03.064 "seek_hole": false, 00:09:03.064 "seek_data": false, 00:09:03.064 "copy": true, 00:09:03.064 "nvme_iov_md": false 00:09:03.064 }, 00:09:03.064 "memory_domains": [ 00:09:03.064 { 00:09:03.064 "dma_device_id": "system", 00:09:03.064 "dma_device_type": 1 00:09:03.064 }, 00:09:03.064 { 00:09:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.064 "dma_device_type": 2 00:09:03.064 } 00:09:03.064 ], 00:09:03.064 "driver_specific": {} 00:09:03.064 }, 00:09:03.064 { 00:09:03.064 "name": "Passthru0", 00:09:03.064 "aliases": [ 00:09:03.064 "59bafee8-e1db-5160-9cdd-063ea15bafe6" 00:09:03.064 ], 00:09:03.064 "product_name": "passthru", 00:09:03.064 "block_size": 512, 00:09:03.064 "num_blocks": 16384, 00:09:03.064 "uuid": "59bafee8-e1db-5160-9cdd-063ea15bafe6", 00:09:03.064 "assigned_rate_limits": { 00:09:03.064 "rw_ios_per_sec": 0, 00:09:03.064 "rw_mbytes_per_sec": 0, 00:09:03.064 "r_mbytes_per_sec": 0, 00:09:03.064 "w_mbytes_per_sec": 0 00:09:03.064 }, 00:09:03.064 "claimed": false, 00:09:03.064 "zoned": false, 00:09:03.064 "supported_io_types": { 00:09:03.064 "read": true, 00:09:03.064 "write": true, 00:09:03.064 "unmap": true, 00:09:03.064 "flush": true, 00:09:03.064 "reset": true, 00:09:03.064 "nvme_admin": false, 00:09:03.064 "nvme_io": false, 00:09:03.064 "nvme_io_md": false, 00:09:03.064 "write_zeroes": true, 00:09:03.064 "zcopy": true, 00:09:03.064 "get_zone_info": false, 00:09:03.064 "zone_management": false, 00:09:03.064 "zone_append": false, 00:09:03.064 "compare": false, 00:09:03.064 "compare_and_write": false, 00:09:03.064 "abort": true, 00:09:03.064 "seek_hole": false, 00:09:03.064 "seek_data": false, 00:09:03.064 "copy": true, 00:09:03.064 "nvme_iov_md": false 00:09:03.064 }, 00:09:03.064 "memory_domains": [ 00:09:03.064 { 00:09:03.064 "dma_device_id": "system", 00:09:03.064 "dma_device_type": 1 00:09:03.064 }, 00:09:03.064 { 00:09:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.064 "dma_device_type": 2 00:09:03.064 } 00:09:03.064 ], 00:09:03.064 "driver_specific": { 00:09:03.064 "passthru": { 00:09:03.064 "name": "Passthru0", 00:09:03.064 "base_bdev_name": "Malloc0" 00:09:03.064 } 00:09:03.064 } 00:09:03.064 } 00:09:03.064 ]' 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.064 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.064 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.334 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.334 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:03.334 05:19:17 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.334 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.334 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.334 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:03.334 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:03.334 ************************************ 00:09:03.334 END TEST rpc_integrity 00:09:03.334 ************************************ 00:09:03.334 05:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:03.334 00:09:03.334 real 0m0.473s 00:09:03.334 user 0m0.266s 00:09:03.334 sys 0m0.070s 00:09:03.334 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.334 05:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.334 05:19:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:03.334 05:19:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.334 05:19:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.334 05:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.334 ************************************ 00:09:03.334 START TEST rpc_plugins 00:09:03.334 ************************************ 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:09:03.334 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.334 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:03.334 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.334 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.593 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.593 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:03.593 { 00:09:03.593 "name": "Malloc1", 00:09:03.593 "aliases": [ 00:09:03.593 "98192295-1578-4ef2-b856-ff04a90aa7bd" 00:09:03.593 ], 00:09:03.593 "product_name": "Malloc disk", 00:09:03.593 "block_size": 4096, 00:09:03.593 "num_blocks": 256, 00:09:03.593 "uuid": "98192295-1578-4ef2-b856-ff04a90aa7bd", 00:09:03.593 "assigned_rate_limits": { 00:09:03.593 "rw_ios_per_sec": 0, 00:09:03.593 "rw_mbytes_per_sec": 0, 00:09:03.593 "r_mbytes_per_sec": 0, 00:09:03.593 "w_mbytes_per_sec": 0 00:09:03.593 }, 00:09:03.593 "claimed": false, 00:09:03.593 "zoned": false, 00:09:03.593 "supported_io_types": { 00:09:03.593 "read": true, 00:09:03.593 "write": true, 00:09:03.593 "unmap": true, 00:09:03.593 "flush": true, 00:09:03.593 "reset": true, 00:09:03.593 "nvme_admin": false, 00:09:03.593 "nvme_io": false, 00:09:03.593 "nvme_io_md": false, 00:09:03.593 "write_zeroes": true, 00:09:03.593 "zcopy": true, 00:09:03.593 "get_zone_info": false, 00:09:03.593 "zone_management": false, 00:09:03.593 "zone_append": false, 00:09:03.593 "compare": false, 00:09:03.593 "compare_and_write": false, 00:09:03.593 "abort": true, 00:09:03.593 "seek_hole": false, 00:09:03.593 "seek_data": false, 00:09:03.593 "copy": true, 00:09:03.593 "nvme_iov_md": false 00:09:03.593 }, 00:09:03.593 "memory_domains": [ 00:09:03.593 { 
00:09:03.593 "dma_device_id": "system", 00:09:03.593 "dma_device_type": 1 00:09:03.593 }, 00:09:03.593 { 00:09:03.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.593 "dma_device_type": 2 00:09:03.593 } 00:09:03.593 ], 00:09:03.593 "driver_specific": {} 00:09:03.593 } 00:09:03.593 ]' 00:09:03.593 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:03.593 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:03.594 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.594 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 05:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.594 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:03.594 05:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:03.594 05:19:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:03.594 ************************************ 00:09:03.594 END TEST rpc_plugins 00:09:03.594 ************************************ 00:09:03.594 00:09:03.594 real 0m0.243s 00:09:03.594 user 0m0.136s 00:09:03.594 sys 0m0.018s 00:09:03.594 05:19:18 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.594 05:19:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.852 05:19:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:03.852 05:19:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.852 05:19:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.852 05:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.852 ************************************ 00:09:03.852 START TEST rpc_trace_cmd_test 00:09:03.852 ************************************ 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.852 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:03.852 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57151", 00:09:03.852 "tpoint_group_mask": "0x8", 00:09:03.852 "iscsi_conn": { 00:09:03.852 "mask": "0x2", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "scsi": { 00:09:03.852 "mask": "0x4", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "bdev": { 00:09:03.852 "mask": "0x8", 00:09:03.852 "tpoint_mask": "0xffffffffffffffff" 00:09:03.852 }, 00:09:03.852 "nvmf_rdma": { 00:09:03.852 "mask": "0x10", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "nvmf_tcp": { 00:09:03.852 "mask": "0x20", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "ftl": { 00:09:03.852 
"mask": "0x40", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "blobfs": { 00:09:03.852 "mask": "0x80", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "dsa": { 00:09:03.852 "mask": "0x200", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "thread": { 00:09:03.852 "mask": "0x400", 00:09:03.852 "tpoint_mask": "0x0" 00:09:03.852 }, 00:09:03.852 "nvme_pcie": { 00:09:03.853 "mask": "0x800", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "iaa": { 00:09:03.853 "mask": "0x1000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "nvme_tcp": { 00:09:03.853 "mask": "0x2000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "bdev_nvme": { 00:09:03.853 "mask": "0x4000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "sock": { 00:09:03.853 "mask": "0x8000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "blob": { 00:09:03.853 "mask": "0x10000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "bdev_raid": { 00:09:03.853 "mask": "0x20000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 }, 00:09:03.853 "scheduler": { 00:09:03.853 "mask": "0x40000", 00:09:03.853 "tpoint_mask": "0x0" 00:09:03.853 } 00:09:03.853 }' 00:09:03.853 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:03.853 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:03.853 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:03.853 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:03.853 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:04.111 ************************************ 00:09:04.111 END TEST rpc_trace_cmd_test 00:09:04.111 ************************************ 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:04.111 00:09:04.111 real 0m0.396s 00:09:04.111 user 0m0.287s 00:09:04.111 sys 0m0.052s 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.111 05:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 05:19:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:04.412 05:19:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:04.412 05:19:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:04.412 05:19:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:04.412 05:19:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.412 05:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 ************************************ 00:09:04.412 START TEST rpc_daemon_integrity 00:09:04.412 ************************************ 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 
05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:04.412 { 00:09:04.412 "name": "Malloc2", 00:09:04.412 "aliases": [ 00:09:04.412 "c6664b87-df5b-4880-bd4c-965b80ce061e" 00:09:04.412 ], 00:09:04.412 "product_name": "Malloc disk", 00:09:04.412 "block_size": 512, 00:09:04.412 "num_blocks": 16384, 00:09:04.412 "uuid": "c6664b87-df5b-4880-bd4c-965b80ce061e", 00:09:04.412 "assigned_rate_limits": { 00:09:04.412 "rw_ios_per_sec": 0, 00:09:04.412 "rw_mbytes_per_sec": 0, 00:09:04.412 "r_mbytes_per_sec": 0, 00:09:04.412 "w_mbytes_per_sec": 0 00:09:04.412 }, 00:09:04.412 "claimed": false, 00:09:04.412 "zoned": false, 00:09:04.412 "supported_io_types": { 00:09:04.412 "read": true, 00:09:04.412 "write": true, 00:09:04.412 "unmap": true, 00:09:04.412 "flush": true, 00:09:04.412 "reset": true, 00:09:04.412 "nvme_admin": false, 00:09:04.412 "nvme_io": false, 00:09:04.412 "nvme_io_md": false, 00:09:04.412 "write_zeroes": true, 00:09:04.412 "zcopy": true, 00:09:04.412 "get_zone_info": false, 00:09:04.412 "zone_management": false, 00:09:04.412 "zone_append": false, 00:09:04.412 "compare": false, 00:09:04.412 "compare_and_write": false, 00:09:04.412 "abort": true, 00:09:04.412 "seek_hole": false, 00:09:04.412 "seek_data": false, 00:09:04.412 "copy": true, 00:09:04.412 "nvme_iov_md": false 00:09:04.412 }, 00:09:04.412 "memory_domains": [ 00:09:04.412 { 00:09:04.412 "dma_device_id": "system", 00:09:04.412 "dma_device_type": 1 00:09:04.412 }, 00:09:04.412 { 00:09:04.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.412 "dma_device_type": 2 00:09:04.412 } 00:09:04.412 ], 00:09:04.412 "driver_specific": {} 00:09:04.412 } 00:09:04.412 ]' 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.412 [2024-11-20 05:19:18.899248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:04.412 [2024-11-20 05:19:18.899368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:09:04.412 [2024-11-20 05:19:18.899411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x757b40 00:09:04.412 [2024-11-20 05:19:18.899429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.412 [2024-11-20 05:19:18.901344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.412 [2024-11-20 05:19:18.901410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:04.412 Passthru0 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.412 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.682 05:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.682 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:04.682 { 00:09:04.682 "name": "Malloc2", 00:09:04.682 "aliases": [ 00:09:04.682 "c6664b87-df5b-4880-bd4c-965b80ce061e" 00:09:04.682 ], 00:09:04.682 "product_name": "Malloc disk", 00:09:04.682 "block_size": 512, 00:09:04.682 "num_blocks": 16384, 00:09:04.682 "uuid": "c6664b87-df5b-4880-bd4c-965b80ce061e", 00:09:04.682 "assigned_rate_limits": { 00:09:04.682 "rw_ios_per_sec": 0, 00:09:04.682 "rw_mbytes_per_sec": 0, 00:09:04.682 "r_mbytes_per_sec": 0, 00:09:04.682 "w_mbytes_per_sec": 0 00:09:04.682 }, 00:09:04.682 "claimed": true, 00:09:04.682 "claim_type": "exclusive_write", 00:09:04.682 "zoned": false, 00:09:04.682 "supported_io_types": { 00:09:04.682 "read": true, 00:09:04.682 "write": true, 00:09:04.682 "unmap": true, 00:09:04.682 "flush": true, 00:09:04.682 "reset": true, 00:09:04.682 "nvme_admin": false, 00:09:04.682 "nvme_io": false, 00:09:04.682 "nvme_io_md": false, 00:09:04.682 "write_zeroes": true, 00:09:04.682 "zcopy": true, 00:09:04.682 "get_zone_info": false, 00:09:04.682 "zone_management": false, 00:09:04.682 "zone_append": false, 00:09:04.682 "compare": false, 00:09:04.682 "compare_and_write": false, 00:09:04.682 "abort": true, 00:09:04.682 "seek_hole": false, 00:09:04.682 "seek_data": false, 00:09:04.682 "copy": true, 00:09:04.682 "nvme_iov_md": false 00:09:04.682 }, 00:09:04.682 "memory_domains": [ 00:09:04.682 { 00:09:04.682 "dma_device_id": "system", 00:09:04.682 "dma_device_type": 1 00:09:04.682 }, 00:09:04.682 { 00:09:04.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.682 "dma_device_type": 2 00:09:04.682 } 00:09:04.682 ], 00:09:04.682 "driver_specific": {} 00:09:04.682 }, 00:09:04.682 { 00:09:04.682 "name": "Passthru0", 00:09:04.682 "aliases": [ 00:09:04.682 "9b5f3d3f-2513-5355-b61e-53f745914624" 00:09:04.682 ], 00:09:04.682 "product_name": "passthru", 00:09:04.682 "block_size": 512, 00:09:04.682 "num_blocks": 16384, 00:09:04.682 "uuid": "9b5f3d3f-2513-5355-b61e-53f745914624", 00:09:04.682 "assigned_rate_limits": { 00:09:04.682 "rw_ios_per_sec": 0, 00:09:04.682 "rw_mbytes_per_sec": 0, 00:09:04.682 "r_mbytes_per_sec": 0, 00:09:04.682 "w_mbytes_per_sec": 0 00:09:04.682 }, 00:09:04.682 "claimed": false, 00:09:04.682 "zoned": false, 00:09:04.682 "supported_io_types": { 00:09:04.682 "read": true, 00:09:04.682 "write": true, 00:09:04.682 "unmap": true, 00:09:04.682 "flush": true, 00:09:04.682 "reset": true, 00:09:04.682 "nvme_admin": false, 00:09:04.682 "nvme_io": false, 00:09:04.682 "nvme_io_md": 
false, 00:09:04.682 "write_zeroes": true, 00:09:04.682 "zcopy": true, 00:09:04.682 "get_zone_info": false, 00:09:04.682 "zone_management": false, 00:09:04.682 "zone_append": false, 00:09:04.682 "compare": false, 00:09:04.682 "compare_and_write": false, 00:09:04.682 "abort": true, 00:09:04.682 "seek_hole": false, 00:09:04.682 "seek_data": false, 00:09:04.682 "copy": true, 00:09:04.682 "nvme_iov_md": false 00:09:04.682 }, 00:09:04.682 "memory_domains": [ 00:09:04.682 { 00:09:04.682 "dma_device_id": "system", 00:09:04.682 "dma_device_type": 1 00:09:04.682 }, 00:09:04.682 { 00:09:04.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.682 "dma_device_type": 2 00:09:04.682 } 00:09:04.682 ], 00:09:04.682 "driver_specific": { 00:09:04.682 "passthru": { 00:09:04.682 "name": "Passthru0", 00:09:04.682 "base_bdev_name": "Malloc2" 00:09:04.682 } 00:09:04.682 } 00:09:04.682 } 00:09:04.682 ]' 00:09:04.682 05:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:04.682 00:09:04.682 real 0m0.497s 00:09:04.682 user 0m0.239s 00:09:04.682 sys 0m0.073s 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.682 05:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.682 ************************************ 00:09:04.683 END TEST rpc_daemon_integrity 00:09:04.683 ************************************ 00:09:04.940 05:19:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:04.940 05:19:19 rpc -- rpc/rpc.sh@84 -- # killprocess 57151 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@952 -- # '[' -z 57151 ']' 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@956 -- # kill -0 57151 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@957 -- # uname 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57151 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:04.940 
05:19:19 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:04.940 killing process with pid 57151 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57151' 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@971 -- # kill 57151 00:09:04.940 05:19:19 rpc -- common/autotest_common.sh@976 -- # wait 57151 00:09:05.198 00:09:05.198 real 0m3.454s 00:09:05.198 user 0m4.052s 00:09:05.198 sys 0m0.777s 00:09:05.198 05:19:19 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.198 05:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.198 ************************************ 00:09:05.198 END TEST rpc 00:09:05.198 ************************************ 00:09:05.198 05:19:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:05.198 05:19:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:05.198 05:19:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.198 05:19:19 -- common/autotest_common.sh@10 -- # set +x 00:09:05.198 ************************************ 00:09:05.198 START TEST skip_rpc 00:09:05.198 ************************************ 00:09:05.198 05:19:19 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:05.198 * Looking for test storage... 00:09:05.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:05.198 05:19:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.198 05:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.198 05:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.457 05:19:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 05:19:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:05.457 05:19:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:05.457 05:19:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.457 05:19:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.457 ************************************ 00:09:05.458 START TEST skip_rpc 00:09:05.458 ************************************ 00:09:05.458 05:19:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:09:05.458 05:19:19 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57355 00:09:05.458 05:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:05.458 05:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.458 05:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:05.458 [2024-11-20 05:19:19.913461] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:05.458 [2024-11-20 05:19:19.913631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57355 ] 00:09:05.716 [2024-11-20 05:19:20.070101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.716 [2024-11-20 05:19:20.119524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.716 [2024-11-20 05:19:20.170897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57355 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57355 ']' 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57355 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57355 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:10.978 killing process with pid 57355 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57355' 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57355 00:09:10.978 05:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57355 00:09:10.978 00:09:10.978 real 0m5.363s 00:09:10.978 user 0m5.009s 00:09:10.978 sys 0m0.222s 00:09:10.978 05:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.978 05:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.978 ************************************ 00:09:10.978 END TEST skip_rpc 00:09:10.978 ************************************ 00:09:10.978 05:19:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:10.978 05:19:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:10.978 05:19:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.978 05:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.978 ************************************ 00:09:10.978 START TEST skip_rpc_with_json 00:09:10.978 ************************************ 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57436 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57436 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57436 ']' 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.978 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.978 [2024-11-20 05:19:25.316133] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
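The skip_rpc_with_json case that follows exercises config save/restore: the target (pid 57436) is populated over RPC, its state is dumped with save_config, and a second target (pid 57456) is then booted from that JSON with the RPC server disabled. By hand, the round trip looks roughly like this (a sketch; the output path here is illustrative, the test itself writes test/rpc/config.json):

    # populate some state in the running target, then capture it
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > /tmp/spdk_config.json
    # boot a fresh target directly from the saved state, with no RPC listener
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/spdk_config.json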
00:09:10.978 [2024-11-20 05:19:25.316276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57436 ] 00:09:10.978 [2024-11-20 05:19:25.466208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.264 [2024-11-20 05:19:25.516616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.264 [2024-11-20 05:19:25.568867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.264 [2024-11-20 05:19:25.725421] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:11.264 request: 00:09:11.264 { 00:09:11.264 "trtype": "tcp", 00:09:11.264 "method": "nvmf_get_transports", 00:09:11.264 "req_id": 1 00:09:11.264 } 00:09:11.264 Got JSON-RPC error response 00:09:11.264 response: 00:09:11.264 { 00:09:11.264 "code": -19, 00:09:11.264 "message": "No such device" 00:09:11.264 } 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.264 [2024-11-20 05:19:25.733540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.264 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.547 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:11.547 { 00:09:11.547 "subsystems": [ 00:09:11.547 { 00:09:11.547 "subsystem": "fsdev", 00:09:11.547 "config": [ 00:09:11.547 { 00:09:11.547 "method": "fsdev_set_opts", 00:09:11.547 "params": { 00:09:11.547 "fsdev_io_pool_size": 65535, 00:09:11.547 "fsdev_io_cache_size": 256 00:09:11.547 } 00:09:11.547 } 00:09:11.547 ] 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "subsystem": "keyring", 00:09:11.547 "config": [] 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "subsystem": "iobuf", 00:09:11.547 "config": [ 00:09:11.547 { 00:09:11.547 "method": "iobuf_set_options", 00:09:11.547 "params": { 00:09:11.547 "small_pool_count": 8192, 00:09:11.547 "large_pool_count": 1024, 00:09:11.547 "small_bufsize": 8192, 00:09:11.547 "large_bufsize": 135168, 00:09:11.547 "enable_numa": false 00:09:11.547 } 
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "uring"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "uring",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": false,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 0,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "scsi",
      "config": null
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "vhost_scsi",
      "config": []
    },
    {
      "subsystem": "vhost_blk",
      "config": []
    },
    {
      "subsystem": "ublk",
      "config": []
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": true,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "node_base": "iqn.2016-06.io.spdk",
            "max_sessions": 128,
            "max_connections_per_session": 2,
            "max_queue_depth": 64,
            "default_time2wait": 2,
            "default_time2retain": 20,
            "first_burst_length": 8192,
            "immediate_data": true,
            "allow_duplicated_isid": false,
            "error_recovery_level": 0,
            "nop_timeout": 60,
            "nop_in_interval": 30,
            "disable_chap": false,
            "require_chap": false,
            "mutual_chap": false,
            "chap_group": 0,
            "max_large_datain_per_connection": 64,
            "max_r2t_per_connection": 4,
            "pdu_pool_size": 36864,
            "immediate_data_pool_size": 16384,
            "data_out_pool_size": 2048
          }
        }
      ]
    }
  ]
}
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57436
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57436 ']'
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57436
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57436
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:11.548 killing process with pid 57436
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57436'
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57436
00:09:11.548 05:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57436
00:09:11.806 05:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57456
00:09:11.806 05:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:11.806 05:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57456
00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57456 ']'
00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57456
00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:09:17.068 05:19:31
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57456 00:09:17.068 killing process with pid 57456 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57456' 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57456 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57456 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:17.068 00:09:17.068 real 0m6.277s 00:09:17.068 user 0m6.040s 00:09:17.068 sys 0m0.474s 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:17.068 ************************************ 00:09:17.068 END TEST skip_rpc_with_json 00:09:17.068 ************************************ 00:09:17.068 05:19:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:17.068 05:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:17.068 05:19:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.068 05:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.068 ************************************ 00:09:17.068 START TEST skip_rpc_with_delay 00:09:17.068 ************************************ 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.068 05:19:31 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:17.068 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:17.326 [2024-11-20 05:19:31.619669] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.326 00:09:17.326 real 0m0.085s 00:09:17.326 user 0m0.053s 00:09:17.326 sys 0m0.029s 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.326 05:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:17.326 ************************************ 00:09:17.326 END TEST skip_rpc_with_delay 00:09:17.326 ************************************ 00:09:17.326 05:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:17.326 05:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:17.326 05:19:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:17.326 05:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:17.326 05:19:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.326 05:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.326 ************************************ 00:09:17.326 START TEST exit_on_failed_rpc_init 00:09:17.326 ************************************ 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57571 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57571 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57571 ']' 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.326 05:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:17.326 [2024-11-20 05:19:31.767542] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
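Note: the skip_rpc_with_json run above replays a previously saved configuration (test/rpc/config.json) into a fresh spdk_tgt with no RPC server and then checks the new instance's log for the expected transport. A minimal sketch of that round trip, using the paths from this run; the log redirection and the final echo are illustrative and not part of rpc/skip_rpc.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
  sleep 5
  # the test passes only if the replayed config brought the NVMe-oF TCP transport back up
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt && echo 'config replay OK'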
00:09:17.326 [2024-11-20 05:19:31.767684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57571 ] 00:09:17.585 [2024-11-20 05:19:31.920288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.585 [2024-11-20 05:19:31.954207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.585 [2024-11-20 05:19:31.995167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.843 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:17.844 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:17.844 [2024-11-20 05:19:32.216375] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:17.844 [2024-11-20 05:19:32.216512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57576 ] 00:09:18.102 [2024-11-20 05:19:32.364305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.102 [2024-11-20 05:19:32.413554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.102 [2024-11-20 05:19:32.413690] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:18.103 [2024-11-20 05:19:32.413713] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:18.103 [2024-11-20 05:19:32.413727] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57571 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57571 ']' 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57571 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57571 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:18.103 killing process with pid 57571 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57571' 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57571 00:09:18.103 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57571 00:09:18.363 00:09:18.363 real 0m1.085s 00:09:18.363 user 0m1.300s 00:09:18.363 sys 0m0.309s 00:09:18.363 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.363 05:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 ************************************ 00:09:18.363 END TEST exit_on_failed_rpc_init 00:09:18.363 ************************************ 00:09:18.363 05:19:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:18.363 00:09:18.363 real 0m13.195s 00:09:18.363 user 0m12.601s 00:09:18.363 sys 0m1.206s 00:09:18.363 05:19:32 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.363 05:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 ************************************ 00:09:18.363 END TEST skip_rpc 00:09:18.363 ************************************ 00:09:18.363 05:19:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:18.363 05:19:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:18.363 05:19:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.363 05:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 
************************************ 00:09:18.363 START TEST rpc_client 00:09:18.363 ************************************ 00:09:18.363 05:19:32 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:18.622 * Looking for test storage... 00:09:18.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:18.622 05:19:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:18.622 05:19:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:09:18.622 05:19:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:18.622 05:19:33 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.622 05:19:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:18.622 05:19:33 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.622 05:19:33 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:18.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.622 --rc genhtml_branch_coverage=1 00:09:18.623 --rc genhtml_function_coverage=1 00:09:18.623 --rc genhtml_legend=1 00:09:18.623 --rc geninfo_all_blocks=1 00:09:18.623 --rc geninfo_unexecuted_blocks=1 00:09:18.623 00:09:18.623 ' 00:09:18.623 05:19:33 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.623 --rc genhtml_branch_coverage=1 00:09:18.623 --rc genhtml_function_coverage=1 00:09:18.623 --rc genhtml_legend=1 00:09:18.623 --rc geninfo_all_blocks=1 00:09:18.623 --rc geninfo_unexecuted_blocks=1 00:09:18.623 00:09:18.623 ' 00:09:18.623 05:19:33 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.623 --rc genhtml_branch_coverage=1 00:09:18.623 --rc genhtml_function_coverage=1 00:09:18.623 --rc genhtml_legend=1 00:09:18.623 --rc geninfo_all_blocks=1 00:09:18.623 --rc geninfo_unexecuted_blocks=1 00:09:18.623 00:09:18.623 ' 00:09:18.623 05:19:33 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.623 --rc genhtml_branch_coverage=1 00:09:18.623 --rc genhtml_function_coverage=1 00:09:18.623 --rc genhtml_legend=1 00:09:18.623 --rc geninfo_all_blocks=1 00:09:18.623 --rc geninfo_unexecuted_blocks=1 00:09:18.623 00:09:18.623 ' 00:09:18.623 05:19:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:18.623 OK 00:09:18.623 05:19:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:18.623 00:09:18.623 real 0m0.194s 00:09:18.623 user 0m0.128s 00:09:18.623 sys 0m0.076s 00:09:18.623 05:19:33 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.623 05:19:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 ************************************ 00:09:18.623 END TEST rpc_client 00:09:18.623 ************************************ 00:09:18.623 05:19:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:18.623 05:19:33 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:18.623 05:19:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.623 05:19:33 -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 ************************************ 00:09:18.623 START TEST json_config 00:09:18.623 ************************************ 00:09:18.623 05:19:33 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.883 05:19:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.883 05:19:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.883 05:19:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.883 05:19:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.883 05:19:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.883 05:19:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:18.883 05:19:33 json_config -- scripts/common.sh@345 -- # : 1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.883 05:19:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.883 05:19:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@353 -- # local d=1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.883 05:19:33 json_config -- scripts/common.sh@355 -- # echo 1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.883 05:19:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@353 -- # local d=2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.883 05:19:33 json_config -- scripts/common.sh@355 -- # echo 2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.883 05:19:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.883 05:19:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.883 05:19:33 json_config -- scripts/common.sh@368 -- # return 0 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:18.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.883 --rc genhtml_branch_coverage=1 00:09:18.883 --rc genhtml_function_coverage=1 00:09:18.883 --rc genhtml_legend=1 00:09:18.883 --rc geninfo_all_blocks=1 00:09:18.883 --rc geninfo_unexecuted_blocks=1 00:09:18.883 00:09:18.883 ' 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:18.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.883 --rc genhtml_branch_coverage=1 00:09:18.883 --rc genhtml_function_coverage=1 00:09:18.883 --rc genhtml_legend=1 00:09:18.883 --rc geninfo_all_blocks=1 00:09:18.883 --rc geninfo_unexecuted_blocks=1 00:09:18.883 00:09:18.883 ' 00:09:18.883 05:19:33 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:18.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.883 --rc genhtml_branch_coverage=1 00:09:18.884 --rc genhtml_function_coverage=1 00:09:18.884 --rc genhtml_legend=1 00:09:18.884 --rc geninfo_all_blocks=1 00:09:18.884 --rc geninfo_unexecuted_blocks=1 00:09:18.884 00:09:18.884 ' 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:18.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.884 --rc genhtml_branch_coverage=1 00:09:18.884 --rc genhtml_function_coverage=1 00:09:18.884 --rc genhtml_legend=1 00:09:18.884 --rc geninfo_all_blocks=1 00:09:18.884 --rc geninfo_unexecuted_blocks=1 00:09:18.884 00:09:18.884 ' 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.884 05:19:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.884 05:19:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.884 05:19:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.884 05:19:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.884 05:19:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.884 05:19:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.884 05:19:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.884 05:19:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.884 05:19:33 json_config -- paths/export.sh@5 -- # export PATH 00:09:18.884 05:19:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@51 -- # : 0 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.884 05:19:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.884 05:19:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:18.884 INFO: JSON configuration test init 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 05:19:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:18.884 05:19:33 json_config -- json_config/common.sh@9 -- # local app=target 00:09:18.884 05:19:33 json_config -- json_config/common.sh@10 -- # shift 
00:09:18.884 05:19:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:18.884 05:19:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:18.884 05:19:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:18.884 05:19:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:18.884 05:19:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:18.884 05:19:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57710 00:09:18.884 05:19:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:18.884 Waiting for target to run... 00:09:18.884 05:19:33 json_config -- json_config/common.sh@25 -- # waitforlisten 57710 /var/tmp/spdk_tgt.sock 00:09:18.884 05:19:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@833 -- # '[' -z 57710 ']' 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.884 05:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 [2024-11-20 05:19:33.341111] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:18.884 [2024-11-20 05:19:33.341218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57710 ] 00:09:19.144 [2024-11-20 05:19:33.648501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.402 [2024-11-20 05:19:33.682493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@866 -- # return 0 00:09:19.969 05:19:34 json_config -- json_config/common.sh@26 -- # echo '' 00:09:19.969 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.969 05:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:19.969 05:19:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:19.969 05:19:34 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:20.534 [2024-11-20 05:19:34.786119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.534 05:19:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:20.534 05:19:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:20.534 05:19:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.534 05:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:20.534 05:19:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:20.535 05:19:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:20.535 05:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:20.793 05:19:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:20.793 05:19:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:20.793 05:19:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:21.051 05:19:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:21.051 05:19:35 json_config -- json_config/json_config.sh@54 -- # sort 00:09:21.051 05:19:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:21.052 05:19:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.052 05:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:21.052 05:19:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.052 05:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.052 05:19:35 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:21.052 05:19:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:21.052 05:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:21.329 MallocForNvmf0 00:09:21.330 05:19:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:21.330 05:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:21.656 MallocForNvmf1 00:09:21.656 05:19:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:21.656 05:19:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:21.915 [2024-11-20 05:19:36.189315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.915 05:19:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.915 05:19:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.172 05:19:36 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:22.172 05:19:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:22.430 05:19:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:22.430 05:19:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:22.686 05:19:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:22.686 05:19:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:22.944 [2024-11-20 05:19:37.394036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:22.944 05:19:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:22.944 05:19:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.944 05:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.944 05:19:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:22.944 05:19:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.944 05:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.202 05:19:37 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:09:23.202 05:19:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:23.202 05:19:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:23.460 MallocBdevForConfigChangeCheck 00:09:23.460 05:19:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:23.460 05:19:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.460 05:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.460 05:19:37 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:23.460 05:19:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:24.028 INFO: shutting down applications... 00:09:24.028 05:19:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:24.028 05:19:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:24.028 05:19:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:24.028 05:19:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:24.028 05:19:38 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:24.286 Calling clear_iscsi_subsystem 00:09:24.286 Calling clear_nvmf_subsystem 00:09:24.286 Calling clear_nbd_subsystem 00:09:24.286 Calling clear_ublk_subsystem 00:09:24.286 Calling clear_vhost_blk_subsystem 00:09:24.286 Calling clear_vhost_scsi_subsystem 00:09:24.286 Calling clear_bdev_subsystem 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:24.286 05:19:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:24.863 05:19:39 json_config -- json_config/json_config.sh@352 -- # break 00:09:24.863 05:19:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:24.863 05:19:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:24.863 05:19:39 json_config -- json_config/common.sh@31 -- # local app=target 00:09:24.863 05:19:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:24.863 05:19:39 json_config -- json_config/common.sh@35 -- # [[ -n 57710 ]] 00:09:24.863 05:19:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57710 00:09:24.863 05:19:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:24.863 05:19:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.863 05:19:39 json_config -- json_config/common.sh@41 -- # kill -0 57710 00:09:24.863 05:19:39 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:09:25.121 05:19:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:25.121 05:19:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.121 05:19:39 json_config -- json_config/common.sh@41 -- # kill -0 57710 00:09:25.122 SPDK target shutdown done 00:09:25.122 INFO: relaunching applications... 00:09:25.122 05:19:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:25.122 05:19:39 json_config -- json_config/common.sh@43 -- # break 00:09:25.122 05:19:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:25.122 05:19:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:25.122 05:19:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:25.122 05:19:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:25.122 05:19:39 json_config -- json_config/common.sh@9 -- # local app=target 00:09:25.122 05:19:39 json_config -- json_config/common.sh@10 -- # shift 00:09:25.122 05:19:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:25.122 05:19:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:25.122 05:19:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:25.122 05:19:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.122 05:19:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.122 05:19:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57911 00:09:25.122 05:19:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:25.122 05:19:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:25.122 Waiting for target to run... 00:09:25.122 05:19:39 json_config -- json_config/common.sh@25 -- # waitforlisten 57911 /var/tmp/spdk_tgt.sock 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@833 -- # '[' -z 57911 ']' 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:25.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.122 05:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.379 [2024-11-20 05:19:39.656071] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
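Note: before relaunching against the saved spdk_tgt_config.json, the harness stops the first target and polls until it exits. A compact sketch of that shutdown-and-relaunch pattern as exercised above; the helper name is illustrative, while the SIGINT, the 30 x 0.5 s wait budget, and the spdk_tgt flags mirror json_config/common.sh:

  stop_and_relaunch() {
    local pid=$1 cfg=$2
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do      # same 30 x 0.5 s retry budget as json_config/common.sh
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
    done
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json "$cfg" &
    echo $!                             # PID of the relaunched target
  }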
00:09:25.379 [2024-11-20 05:19:39.656456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57911 ] 00:09:25.637 [2024-11-20 05:19:39.975112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.637 [2024-11-20 05:19:40.008611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.637 [2024-11-20 05:19:40.140817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.896 [2024-11-20 05:19:40.343349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.896 [2024-11-20 05:19:40.375455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:26.154 00:09:26.154 INFO: Checking if target configuration is the same... 00:09:26.154 05:19:40 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.154 05:19:40 json_config -- common/autotest_common.sh@866 -- # return 0 00:09:26.154 05:19:40 json_config -- json_config/common.sh@26 -- # echo '' 00:09:26.155 05:19:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:26.155 05:19:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:26.155 05:19:40 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:26.155 05:19:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:26.155 05:19:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:26.413 + '[' 2 -ne 2 ']' 00:09:26.413 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:26.413 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:26.413 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:26.413 +++ basename /dev/fd/62 00:09:26.413 ++ mktemp /tmp/62.XXX 00:09:26.413 + tmp_file_1=/tmp/62.l3K 00:09:26.413 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:26.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:26.413 + tmp_file_2=/tmp/spdk_tgt_config.json.C6f 00:09:26.413 + ret=0 00:09:26.413 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:26.671 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:26.928 + diff -u /tmp/62.l3K /tmp/spdk_tgt_config.json.C6f 00:09:26.928 INFO: JSON config files are the same 00:09:26.928 + echo 'INFO: JSON config files are the same' 00:09:26.928 + rm /tmp/62.l3K /tmp/spdk_tgt_config.json.C6f 00:09:26.928 + exit 0 00:09:26.928 INFO: changing configuration and checking if this can be detected... 00:09:26.928 05:19:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:26.928 05:19:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
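Note: the 'INFO: JSON config files are the same' verdict above comes from sorting both configuration documents with config_filter.py and diffing the results. Roughly, with temporary file names chosen here only for illustration:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.sorted.json
  diff -u /tmp/disk.sorted.json /tmp/live.sorted.json && echo 'INFO: JSON config files are the same'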
00:09:26.929 05:19:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:26.929 05:19:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:27.187 05:19:41 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.187 05:19:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:27.187 05:19:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:27.187 + '[' 2 -ne 2 ']' 00:09:27.187 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:27.187 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:27.187 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:27.187 +++ basename /dev/fd/62 00:09:27.187 ++ mktemp /tmp/62.XXX 00:09:27.187 + tmp_file_1=/tmp/62.0RE 00:09:27.187 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.187 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:27.187 + tmp_file_2=/tmp/spdk_tgt_config.json.RFA 00:09:27.187 + ret=0 00:09:27.187 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:27.753 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:27.753 + diff -u /tmp/62.0RE /tmp/spdk_tgt_config.json.RFA 00:09:27.753 + ret=1 00:09:27.753 + echo '=== Start of file: /tmp/62.0RE ===' 00:09:27.753 + cat /tmp/62.0RE 00:09:27.753 + echo '=== End of file: /tmp/62.0RE ===' 00:09:27.753 + echo '' 00:09:27.753 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RFA ===' 00:09:27.753 + cat /tmp/spdk_tgt_config.json.RFA 00:09:27.753 + echo '=== End of file: /tmp/spdk_tgt_config.json.RFA ===' 00:09:27.753 + echo '' 00:09:27.753 + rm /tmp/62.0RE /tmp/spdk_tgt_config.json.RFA 00:09:27.753 + exit 1 00:09:27.753 INFO: configuration change detected. 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
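Both checks above reduce to the same operation: dump the running target's configuration with save_config, canonicalize both JSON documents with config_filter.py -method sort, and diff them. An empty diff means the saved config was fully restored; deleting MallocBdevForConfigChangeCheck must then make the diff non-empty (exit 1). A condensed sketch of that flow using the same helpers seen in the trace, assuming config_filter.py filters stdin to stdout and with the temp-file handling simplified:

  # Hedged sketch of the comparison performed by json_diff.sh in this test.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

  live=$(mktemp)
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
  "$filter" -method sort < "$saved" > "$saved.sorted"

  if diff -u "$live" "$saved.sorted" > /dev/null; then
    echo 'INFO: JSON config files are the same'
  else
    echo 'INFO: configuration change detected.'
  fi
  rm -f "$live" "$saved.sorted"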
00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:27.753 05:19:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.753 05:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 57911 ]] 00:09:27.753 05:19:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.754 05:19:42 json_config -- json_config/json_config.sh@330 -- # killprocess 57911 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@952 -- # '[' -z 57911 ']' 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@956 -- # kill -0 57911 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@957 -- # uname 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.754 05:19:42 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57911 00:09:28.012 killing process with pid 57911 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57911' 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@971 -- # kill 57911 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@976 -- # wait 57911 00:09:28.012 05:19:42 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.012 05:19:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.012 INFO: Success 00:09:28.012 05:19:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:28.012 05:19:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:28.012 00:09:28.012 real 0m9.370s 00:09:28.012 user 0m14.055s 00:09:28.012 sys 0m1.572s 00:09:28.012 
************************************ 00:09:28.012 END TEST json_config 00:09:28.012 ************************************ 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.012 05:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.012 05:19:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:28.012 05:19:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.012 05:19:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.012 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:09:28.012 ************************************ 00:09:28.012 START TEST json_config_extra_key 00:09:28.012 ************************************ 00:09:28.012 05:19:42 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:28.270 05:19:42 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.270 05:19:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.270 05:19:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.270 05:19:42 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.270 05:19:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:28.271 05:19:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.271 05:19:42 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.271 --rc genhtml_branch_coverage=1 00:09:28.271 --rc genhtml_function_coverage=1 00:09:28.271 --rc genhtml_legend=1 00:09:28.271 --rc geninfo_all_blocks=1 00:09:28.271 --rc geninfo_unexecuted_blocks=1 00:09:28.271 00:09:28.271 ' 00:09:28.271 05:19:42 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.271 --rc genhtml_branch_coverage=1 00:09:28.271 --rc genhtml_function_coverage=1 00:09:28.271 --rc genhtml_legend=1 00:09:28.271 --rc geninfo_all_blocks=1 00:09:28.271 --rc geninfo_unexecuted_blocks=1 00:09:28.271 00:09:28.271 ' 00:09:28.271 05:19:42 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.271 --rc genhtml_branch_coverage=1 00:09:28.271 --rc genhtml_function_coverage=1 00:09:28.271 --rc genhtml_legend=1 00:09:28.271 --rc geninfo_all_blocks=1 00:09:28.271 --rc geninfo_unexecuted_blocks=1 00:09:28.271 00:09:28.271 ' 00:09:28.271 05:19:42 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.271 --rc genhtml_branch_coverage=1 00:09:28.271 --rc genhtml_function_coverage=1 00:09:28.271 --rc genhtml_legend=1 00:09:28.271 --rc geninfo_all_blocks=1 00:09:28.271 --rc geninfo_unexecuted_blocks=1 00:09:28.271 00:09:28.271 ' 00:09:28.271 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.271 05:19:42 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.271 05:19:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.271 05:19:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.271 05:19:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.271 05:19:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.271 05:19:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:28.271 05:19:42 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.271 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.271 05:19:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.271 INFO: launching applications... 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
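The preamble above shows how json_config/common.sh keeps track of the application it is about to launch: bash associative arrays keyed by app name ('target') hold the pid, RPC socket, spdk_tgt parameters and the JSON config path, and an ERR trap funnels any failure into on_error_exit. A small sketch of that bookkeeping pattern; the on_error_exit body is not shown in the trace and is assumed here:

  # Hedged sketch of the per-app state used by the json_config_extra_key test.
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

  on_error_exit() {
    # Assumed handler: report the failing function/line and kill the app.
    echo "Error in $1 at line $2" >&2
    [[ -n ${app_pid[target]} ]] && kill "${app_pid[target]}"
    exit 1
  }
  trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR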
00:09:28.272 05:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58071 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:28.272 Waiting for target to run... 00:09:28.272 05:19:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58071 /var/tmp/spdk_tgt.sock 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58071 ']' 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:28.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.272 05:19:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:28.530 [2024-11-20 05:19:42.813723] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:28.530 [2024-11-20 05:19:42.813863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58071 ] 00:09:28.789 [2024-11-20 05:19:43.127710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.789 [2024-11-20 05:19:43.166679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.789 [2024-11-20 05:19:43.197018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.723 00:09:29.723 INFO: shutting down applications... 00:09:29.723 05:19:44 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.723 05:19:44 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:29.723 05:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
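Starting the target in the background is only half the job: waitforlisten then blocks until the new spdk_tgt answers on its RPC socket, giving up after max_retries=100 attempts and returning non-zero immediately if the pid dies during init. A hedged reconstruction of that polling idea; the actual probe and sleep interval in autotest_common.sh may differ:

  # Hedged sketch: wait until a freshly started spdk_tgt serves RPC requests.
  waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      # If the target crashed during init, stop waiting immediately.
      kill -0 "$pid" 2> /dev/null || return 1
      # Any successful RPC call means the socket is up; rpc_get_methods is cheap.
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
          rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.1   # assumed interval
    done
    return 1
  }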
00:09:29.723 05:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58071 ]] 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58071 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58071 00:09:29.723 05:19:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58071 00:09:30.291 SPDK target shutdown done 00:09:30.291 Success 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:30.291 05:19:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:30.291 05:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:30.291 00:09:30.291 real 0m2.037s 00:09:30.291 user 0m2.145s 00:09:30.291 sys 0m0.360s 00:09:30.291 ************************************ 00:09:30.291 END TEST json_config_extra_key 00:09:30.291 ************************************ 00:09:30.291 05:19:44 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.291 05:19:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 05:19:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:30.291 05:19:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:30.291 05:19:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.291 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 ************************************ 00:09:30.291 START TEST alias_rpc 00:09:30.291 ************************************ 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:30.291 * Looking for test storage... 
00:09:30.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.291 05:19:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.291 --rc genhtml_branch_coverage=1 00:09:30.291 --rc genhtml_function_coverage=1 00:09:30.291 --rc genhtml_legend=1 00:09:30.291 --rc geninfo_all_blocks=1 00:09:30.291 --rc geninfo_unexecuted_blocks=1 00:09:30.291 00:09:30.291 ' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.291 --rc genhtml_branch_coverage=1 00:09:30.291 --rc genhtml_function_coverage=1 00:09:30.291 --rc genhtml_legend=1 00:09:30.291 --rc geninfo_all_blocks=1 00:09:30.291 --rc geninfo_unexecuted_blocks=1 00:09:30.291 00:09:30.291 ' 00:09:30.291 05:19:44 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.291 --rc genhtml_branch_coverage=1 00:09:30.291 --rc genhtml_function_coverage=1 00:09:30.291 --rc genhtml_legend=1 00:09:30.291 --rc geninfo_all_blocks=1 00:09:30.291 --rc geninfo_unexecuted_blocks=1 00:09:30.291 00:09:30.291 ' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.291 --rc genhtml_branch_coverage=1 00:09:30.291 --rc genhtml_function_coverage=1 00:09:30.291 --rc genhtml_legend=1 00:09:30.291 --rc geninfo_all_blocks=1 00:09:30.291 --rc geninfo_unexecuted_blocks=1 00:09:30.291 00:09:30.291 ' 00:09:30.291 05:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:30.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.291 05:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58149 00:09:30.291 05:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.291 05:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58149 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58149 ']' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:30.291 05:19:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 [2024-11-20 05:19:44.850727] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
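Every suite in this log tears its target down through the killprocess helper, and its trace shows the same steps each time: confirm the pid argument, check the process is still alive, resolve its command name with ps (so a sudo wrapper is never signalled directly), then kill and wait. A hedged reconstruction of that helper from the lines visible here; the sudo branch itself is not traced and is assumed:

  # Hedged reconstruction of autotest_common.sh's killprocess, Linux path only.
  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone
    # Resolve the command name; on this host spdk_tgt shows up as reactor_0.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
      return 1   # assumed: never signal the sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
  }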
00:09:30.550 [2024-11-20 05:19:44.851278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58149 ] 00:09:30.550 [2024-11-20 05:19:44.993556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.550 [2024-11-20 05:19:45.029193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.808 [2024-11-20 05:19:45.073755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.808 05:19:45 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.808 05:19:45 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:30.808 05:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:31.385 05:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58149 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58149 ']' 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58149 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58149 00:09:31.385 killing process with pid 58149 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58149' 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@971 -- # kill 58149 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@976 -- # wait 58149 00:09:31.385 ************************************ 00:09:31.385 END TEST alias_rpc 00:09:31.385 ************************************ 00:09:31.385 00:09:31.385 real 0m1.298s 00:09:31.385 user 0m1.562s 00:09:31.385 sys 0m0.356s 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.385 05:19:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.644 05:19:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:31.644 05:19:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:31.644 05:19:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:31.644 05:19:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.644 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:31.644 ************************************ 00:09:31.644 START TEST spdkcli_tcp 00:09:31.644 ************************************ 00:09:31.644 05:19:45 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:31.644 * Looking for test storage... 
00:09:31.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.645 05:19:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.645 --rc genhtml_branch_coverage=1 00:09:31.645 --rc genhtml_function_coverage=1 00:09:31.645 --rc genhtml_legend=1 00:09:31.645 --rc geninfo_all_blocks=1 00:09:31.645 --rc geninfo_unexecuted_blocks=1 00:09:31.645 00:09:31.645 ' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.645 --rc genhtml_branch_coverage=1 00:09:31.645 --rc genhtml_function_coverage=1 00:09:31.645 --rc genhtml_legend=1 00:09:31.645 --rc geninfo_all_blocks=1 00:09:31.645 --rc geninfo_unexecuted_blocks=1 00:09:31.645 
00:09:31.645 ' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.645 --rc genhtml_branch_coverage=1 00:09:31.645 --rc genhtml_function_coverage=1 00:09:31.645 --rc genhtml_legend=1 00:09:31.645 --rc geninfo_all_blocks=1 00:09:31.645 --rc geninfo_unexecuted_blocks=1 00:09:31.645 00:09:31.645 ' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.645 --rc genhtml_branch_coverage=1 00:09:31.645 --rc genhtml_function_coverage=1 00:09:31.645 --rc genhtml_legend=1 00:09:31.645 --rc geninfo_all_blocks=1 00:09:31.645 --rc geninfo_unexecuted_blocks=1 00:09:31.645 00:09:31.645 ' 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58225 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:31.645 05:19:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58225 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58225 ']' 00:09:31.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.645 05:19:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.904 [2024-11-20 05:19:46.231462] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
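spdkcli_tcp exercises the RPC client over TCP rather than the default UNIX socket: the next lines start a socat process that forwards 127.0.0.1:9998 to /var/tmp/spdk.sock and then point rpc.py at that IP/port with retries and a per-call timeout. A minimal sketch of that bridge, with cleanup simplified:

  # Hedged sketch: expose spdk_tgt's UNIX-domain RPC socket over TCP via socat.
  IP_ADDRESS=127.0.0.1
  PORT=9998

  socat TCP-LISTEN:"$PORT" UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # -r: retries, -t: per-call timeout, -s: server address, -p: TCP port.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

  kill "$socat_pid"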
00:09:31.904 [2024-11-20 05:19:46.231812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58225 ] 00:09:31.904 [2024-11-20 05:19:46.385791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.162 [2024-11-20 05:19:46.422637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.162 [2024-11-20 05:19:46.422650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.162 [2024-11-20 05:19:46.463861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.730 05:19:47 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.730 05:19:47 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:09:32.730 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58242 00:09:32.730 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:32.730 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:33.298 [ 00:09:33.298 "bdev_malloc_delete", 00:09:33.298 "bdev_malloc_create", 00:09:33.298 "bdev_null_resize", 00:09:33.298 "bdev_null_delete", 00:09:33.298 "bdev_null_create", 00:09:33.298 "bdev_nvme_cuse_unregister", 00:09:33.298 "bdev_nvme_cuse_register", 00:09:33.298 "bdev_opal_new_user", 00:09:33.298 "bdev_opal_set_lock_state", 00:09:33.298 "bdev_opal_delete", 00:09:33.298 "bdev_opal_get_info", 00:09:33.298 "bdev_opal_create", 00:09:33.298 "bdev_nvme_opal_revert", 00:09:33.298 "bdev_nvme_opal_init", 00:09:33.298 "bdev_nvme_send_cmd", 00:09:33.298 "bdev_nvme_set_keys", 00:09:33.298 "bdev_nvme_get_path_iostat", 00:09:33.298 "bdev_nvme_get_mdns_discovery_info", 00:09:33.298 "bdev_nvme_stop_mdns_discovery", 00:09:33.298 "bdev_nvme_start_mdns_discovery", 00:09:33.298 "bdev_nvme_set_multipath_policy", 00:09:33.298 "bdev_nvme_set_preferred_path", 00:09:33.298 "bdev_nvme_get_io_paths", 00:09:33.298 "bdev_nvme_remove_error_injection", 00:09:33.298 "bdev_nvme_add_error_injection", 00:09:33.298 "bdev_nvme_get_discovery_info", 00:09:33.298 "bdev_nvme_stop_discovery", 00:09:33.298 "bdev_nvme_start_discovery", 00:09:33.298 "bdev_nvme_get_controller_health_info", 00:09:33.298 "bdev_nvme_disable_controller", 00:09:33.298 "bdev_nvme_enable_controller", 00:09:33.298 "bdev_nvme_reset_controller", 00:09:33.298 "bdev_nvme_get_transport_statistics", 00:09:33.298 "bdev_nvme_apply_firmware", 00:09:33.298 "bdev_nvme_detach_controller", 00:09:33.298 "bdev_nvme_get_controllers", 00:09:33.298 "bdev_nvme_attach_controller", 00:09:33.298 "bdev_nvme_set_hotplug", 00:09:33.298 "bdev_nvme_set_options", 00:09:33.298 "bdev_passthru_delete", 00:09:33.298 "bdev_passthru_create", 00:09:33.298 "bdev_lvol_set_parent_bdev", 00:09:33.298 "bdev_lvol_set_parent", 00:09:33.298 "bdev_lvol_check_shallow_copy", 00:09:33.298 "bdev_lvol_start_shallow_copy", 00:09:33.298 "bdev_lvol_grow_lvstore", 00:09:33.298 "bdev_lvol_get_lvols", 00:09:33.298 "bdev_lvol_get_lvstores", 00:09:33.298 "bdev_lvol_delete", 00:09:33.298 "bdev_lvol_set_read_only", 00:09:33.298 "bdev_lvol_resize", 00:09:33.298 "bdev_lvol_decouple_parent", 00:09:33.298 "bdev_lvol_inflate", 00:09:33.298 "bdev_lvol_rename", 00:09:33.298 "bdev_lvol_clone_bdev", 00:09:33.298 "bdev_lvol_clone", 00:09:33.298 "bdev_lvol_snapshot", 
00:09:33.298 "bdev_lvol_create", 00:09:33.298 "bdev_lvol_delete_lvstore", 00:09:33.298 "bdev_lvol_rename_lvstore", 00:09:33.298 "bdev_lvol_create_lvstore", 00:09:33.298 "bdev_raid_set_options", 00:09:33.298 "bdev_raid_remove_base_bdev", 00:09:33.298 "bdev_raid_add_base_bdev", 00:09:33.298 "bdev_raid_delete", 00:09:33.298 "bdev_raid_create", 00:09:33.298 "bdev_raid_get_bdevs", 00:09:33.298 "bdev_error_inject_error", 00:09:33.298 "bdev_error_delete", 00:09:33.298 "bdev_error_create", 00:09:33.298 "bdev_split_delete", 00:09:33.298 "bdev_split_create", 00:09:33.298 "bdev_delay_delete", 00:09:33.298 "bdev_delay_create", 00:09:33.298 "bdev_delay_update_latency", 00:09:33.298 "bdev_zone_block_delete", 00:09:33.298 "bdev_zone_block_create", 00:09:33.298 "blobfs_create", 00:09:33.298 "blobfs_detect", 00:09:33.298 "blobfs_set_cache_size", 00:09:33.298 "bdev_aio_delete", 00:09:33.298 "bdev_aio_rescan", 00:09:33.298 "bdev_aio_create", 00:09:33.298 "bdev_ftl_set_property", 00:09:33.298 "bdev_ftl_get_properties", 00:09:33.299 "bdev_ftl_get_stats", 00:09:33.299 "bdev_ftl_unmap", 00:09:33.299 "bdev_ftl_unload", 00:09:33.299 "bdev_ftl_delete", 00:09:33.299 "bdev_ftl_load", 00:09:33.299 "bdev_ftl_create", 00:09:33.299 "bdev_virtio_attach_controller", 00:09:33.299 "bdev_virtio_scsi_get_devices", 00:09:33.299 "bdev_virtio_detach_controller", 00:09:33.299 "bdev_virtio_blk_set_hotplug", 00:09:33.299 "bdev_iscsi_delete", 00:09:33.299 "bdev_iscsi_create", 00:09:33.299 "bdev_iscsi_set_options", 00:09:33.299 "bdev_uring_delete", 00:09:33.299 "bdev_uring_rescan", 00:09:33.299 "bdev_uring_create", 00:09:33.299 "accel_error_inject_error", 00:09:33.299 "ioat_scan_accel_module", 00:09:33.299 "dsa_scan_accel_module", 00:09:33.299 "iaa_scan_accel_module", 00:09:33.299 "keyring_file_remove_key", 00:09:33.299 "keyring_file_add_key", 00:09:33.299 "keyring_linux_set_options", 00:09:33.299 "fsdev_aio_delete", 00:09:33.299 "fsdev_aio_create", 00:09:33.299 "iscsi_get_histogram", 00:09:33.299 "iscsi_enable_histogram", 00:09:33.299 "iscsi_set_options", 00:09:33.299 "iscsi_get_auth_groups", 00:09:33.299 "iscsi_auth_group_remove_secret", 00:09:33.299 "iscsi_auth_group_add_secret", 00:09:33.299 "iscsi_delete_auth_group", 00:09:33.299 "iscsi_create_auth_group", 00:09:33.299 "iscsi_set_discovery_auth", 00:09:33.299 "iscsi_get_options", 00:09:33.299 "iscsi_target_node_request_logout", 00:09:33.299 "iscsi_target_node_set_redirect", 00:09:33.299 "iscsi_target_node_set_auth", 00:09:33.299 "iscsi_target_node_add_lun", 00:09:33.299 "iscsi_get_stats", 00:09:33.299 "iscsi_get_connections", 00:09:33.299 "iscsi_portal_group_set_auth", 00:09:33.299 "iscsi_start_portal_group", 00:09:33.299 "iscsi_delete_portal_group", 00:09:33.299 "iscsi_create_portal_group", 00:09:33.299 "iscsi_get_portal_groups", 00:09:33.299 "iscsi_delete_target_node", 00:09:33.299 "iscsi_target_node_remove_pg_ig_maps", 00:09:33.299 "iscsi_target_node_add_pg_ig_maps", 00:09:33.299 "iscsi_create_target_node", 00:09:33.299 "iscsi_get_target_nodes", 00:09:33.299 "iscsi_delete_initiator_group", 00:09:33.299 "iscsi_initiator_group_remove_initiators", 00:09:33.299 "iscsi_initiator_group_add_initiators", 00:09:33.299 "iscsi_create_initiator_group", 00:09:33.299 "iscsi_get_initiator_groups", 00:09:33.299 "nvmf_set_crdt", 00:09:33.299 "nvmf_set_config", 00:09:33.299 "nvmf_set_max_subsystems", 00:09:33.299 "nvmf_stop_mdns_prr", 00:09:33.299 "nvmf_publish_mdns_prr", 00:09:33.299 "nvmf_subsystem_get_listeners", 00:09:33.299 "nvmf_subsystem_get_qpairs", 00:09:33.299 
"nvmf_subsystem_get_controllers", 00:09:33.299 "nvmf_get_stats", 00:09:33.299 "nvmf_get_transports", 00:09:33.299 "nvmf_create_transport", 00:09:33.299 "nvmf_get_targets", 00:09:33.299 "nvmf_delete_target", 00:09:33.299 "nvmf_create_target", 00:09:33.299 "nvmf_subsystem_allow_any_host", 00:09:33.299 "nvmf_subsystem_set_keys", 00:09:33.299 "nvmf_subsystem_remove_host", 00:09:33.299 "nvmf_subsystem_add_host", 00:09:33.299 "nvmf_ns_remove_host", 00:09:33.299 "nvmf_ns_add_host", 00:09:33.299 "nvmf_subsystem_remove_ns", 00:09:33.299 "nvmf_subsystem_set_ns_ana_group", 00:09:33.299 "nvmf_subsystem_add_ns", 00:09:33.299 "nvmf_subsystem_listener_set_ana_state", 00:09:33.299 "nvmf_discovery_get_referrals", 00:09:33.299 "nvmf_discovery_remove_referral", 00:09:33.299 "nvmf_discovery_add_referral", 00:09:33.299 "nvmf_subsystem_remove_listener", 00:09:33.299 "nvmf_subsystem_add_listener", 00:09:33.299 "nvmf_delete_subsystem", 00:09:33.299 "nvmf_create_subsystem", 00:09:33.299 "nvmf_get_subsystems", 00:09:33.299 "env_dpdk_get_mem_stats", 00:09:33.299 "nbd_get_disks", 00:09:33.299 "nbd_stop_disk", 00:09:33.299 "nbd_start_disk", 00:09:33.299 "ublk_recover_disk", 00:09:33.299 "ublk_get_disks", 00:09:33.299 "ublk_stop_disk", 00:09:33.299 "ublk_start_disk", 00:09:33.299 "ublk_destroy_target", 00:09:33.299 "ublk_create_target", 00:09:33.299 "virtio_blk_create_transport", 00:09:33.299 "virtio_blk_get_transports", 00:09:33.299 "vhost_controller_set_coalescing", 00:09:33.299 "vhost_get_controllers", 00:09:33.299 "vhost_delete_controller", 00:09:33.299 "vhost_create_blk_controller", 00:09:33.299 "vhost_scsi_controller_remove_target", 00:09:33.299 "vhost_scsi_controller_add_target", 00:09:33.299 "vhost_start_scsi_controller", 00:09:33.299 "vhost_create_scsi_controller", 00:09:33.299 "thread_set_cpumask", 00:09:33.299 "scheduler_set_options", 00:09:33.299 "framework_get_governor", 00:09:33.299 "framework_get_scheduler", 00:09:33.299 "framework_set_scheduler", 00:09:33.299 "framework_get_reactors", 00:09:33.299 "thread_get_io_channels", 00:09:33.299 "thread_get_pollers", 00:09:33.299 "thread_get_stats", 00:09:33.299 "framework_monitor_context_switch", 00:09:33.299 "spdk_kill_instance", 00:09:33.299 "log_enable_timestamps", 00:09:33.299 "log_get_flags", 00:09:33.299 "log_clear_flag", 00:09:33.299 "log_set_flag", 00:09:33.299 "log_get_level", 00:09:33.299 "log_set_level", 00:09:33.299 "log_get_print_level", 00:09:33.299 "log_set_print_level", 00:09:33.299 "framework_enable_cpumask_locks", 00:09:33.299 "framework_disable_cpumask_locks", 00:09:33.299 "framework_wait_init", 00:09:33.299 "framework_start_init", 00:09:33.299 "scsi_get_devices", 00:09:33.299 "bdev_get_histogram", 00:09:33.299 "bdev_enable_histogram", 00:09:33.299 "bdev_set_qos_limit", 00:09:33.299 "bdev_set_qd_sampling_period", 00:09:33.299 "bdev_get_bdevs", 00:09:33.299 "bdev_reset_iostat", 00:09:33.299 "bdev_get_iostat", 00:09:33.299 "bdev_examine", 00:09:33.299 "bdev_wait_for_examine", 00:09:33.299 "bdev_set_options", 00:09:33.299 "accel_get_stats", 00:09:33.299 "accel_set_options", 00:09:33.299 "accel_set_driver", 00:09:33.299 "accel_crypto_key_destroy", 00:09:33.299 "accel_crypto_keys_get", 00:09:33.299 "accel_crypto_key_create", 00:09:33.299 "accel_assign_opc", 00:09:33.299 "accel_get_module_info", 00:09:33.299 "accel_get_opc_assignments", 00:09:33.299 "vmd_rescan", 00:09:33.299 "vmd_remove_device", 00:09:33.299 "vmd_enable", 00:09:33.299 "sock_get_default_impl", 00:09:33.299 "sock_set_default_impl", 00:09:33.299 "sock_impl_set_options", 00:09:33.299 
"sock_impl_get_options", 00:09:33.299 "iobuf_get_stats", 00:09:33.299 "iobuf_set_options", 00:09:33.299 "keyring_get_keys", 00:09:33.299 "framework_get_pci_devices", 00:09:33.299 "framework_get_config", 00:09:33.299 "framework_get_subsystems", 00:09:33.299 "fsdev_set_opts", 00:09:33.299 "fsdev_get_opts", 00:09:33.299 "trace_get_info", 00:09:33.299 "trace_get_tpoint_group_mask", 00:09:33.299 "trace_disable_tpoint_group", 00:09:33.299 "trace_enable_tpoint_group", 00:09:33.299 "trace_clear_tpoint_mask", 00:09:33.299 "trace_set_tpoint_mask", 00:09:33.299 "notify_get_notifications", 00:09:33.299 "notify_get_types", 00:09:33.299 "spdk_get_version", 00:09:33.299 "rpc_get_methods" 00:09:33.299 ] 00:09:33.299 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.299 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:33.299 05:19:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58225 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58225 ']' 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58225 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58225 00:09:33.299 killing process with pid 58225 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58225' 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58225 00:09:33.299 05:19:47 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58225 00:09:33.558 ************************************ 00:09:33.558 END TEST spdkcli_tcp 00:09:33.558 ************************************ 00:09:33.558 00:09:33.558 real 0m1.896s 00:09:33.558 user 0m3.630s 00:09:33.558 sys 0m0.389s 00:09:33.558 05:19:47 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.558 05:19:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 05:19:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.558 05:19:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:33.558 05:19:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.558 05:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 ************************************ 00:09:33.558 START TEST dpdk_mem_utility 00:09:33.558 ************************************ 00:09:33.558 05:19:47 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.558 * Looking for test storage... 
00:09:33.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:33.558 05:19:47 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.558 05:19:47 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.558 05:19:47 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.558 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:33.558 05:19:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.559 05:19:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:33.817 05:19:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.817 05:19:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.817 05:19:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.817 05:19:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:33.817 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.817 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.817 --rc genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.817 ' 00:09:33.817 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.817 --rc 
genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.818 ' 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.818 --rc genhtml_branch_coverage=1 00:09:33.818 --rc genhtml_function_coverage=1 00:09:33.818 --rc genhtml_legend=1 00:09:33.818 --rc geninfo_all_blocks=1 00:09:33.818 --rc geninfo_unexecuted_blocks=1 00:09:33.818 00:09:33.818 ' 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.818 --rc genhtml_branch_coverage=1 00:09:33.818 --rc genhtml_function_coverage=1 00:09:33.818 --rc genhtml_legend=1 00:09:33.818 --rc geninfo_all_blocks=1 00:09:33.818 --rc geninfo_unexecuted_blocks=1 00:09:33.818 00:09:33.818 ' 00:09:33.818 05:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:33.818 05:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58319 00:09:33.818 05:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:33.818 05:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58319 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58319 ']' 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.818 05:19:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:33.818 [2024-11-20 05:19:48.148774] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
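The dpdk_mem_utility test starting here drives the two pieces traced in the lines below: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK heap/mempool/memzone statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump (the -m 0 form prints the detailed layout of heap 0, as the following output shows). A short sketch of the same sequence; the assumption that dpdk_mem_info.py reads /tmp/spdk_mem_dump.txt by default follows from how the test invokes it:

  # Hedged sketch of the dpdk_mem_utility flow.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # Ask the running spdk_tgt to dump its DPDK memory statistics.
  "$rpc" env_dpdk_get_mem_stats          # prints {"filename": "/tmp/spdk_mem_dump.txt"}

  "$MEM_SCRIPT"        # summary: heaps, mempools, memzones
  "$MEM_SCRIPT" -m 0   # detailed view of heap 0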
00:09:33.818 [2024-11-20 05:19:48.149216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:09:33.818 [2024-11-20 05:19:48.302378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.076 [2024-11-20 05:19:48.336792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.076 [2024-11-20 05:19:48.377602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.016 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:35.016 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:09:35.016 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:35.016 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:35.016 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.016 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:35.016 { 00:09:35.016 "filename": "/tmp/spdk_mem_dump.txt" 00:09:35.016 } 00:09:35.016 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.016 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:35.016 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:35.016 1 heaps totaling size 818.000000 MiB 00:09:35.016 size: 818.000000 MiB heap id: 0 00:09:35.016 end heaps---------- 00:09:35.016 9 mempools totaling size 603.782043 MiB 00:09:35.016 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:35.016 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:35.016 size: 100.555481 MiB name: bdev_io_58319 00:09:35.016 size: 50.003479 MiB name: msgpool_58319 00:09:35.016 size: 36.509338 MiB name: fsdev_io_58319 00:09:35.016 size: 21.763794 MiB name: PDU_Pool 00:09:35.016 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:35.016 size: 4.133484 MiB name: evtpool_58319 00:09:35.016 size: 0.026123 MiB name: Session_Pool 00:09:35.016 end mempools------- 00:09:35.016 6 memzones totaling size 4.142822 MiB 00:09:35.016 size: 1.000366 MiB name: RG_ring_0_58319 00:09:35.016 size: 1.000366 MiB name: RG_ring_1_58319 00:09:35.016 size: 1.000366 MiB name: RG_ring_4_58319 00:09:35.016 size: 1.000366 MiB name: RG_ring_5_58319 00:09:35.016 size: 0.125366 MiB name: RG_ring_2_58319 00:09:35.016 size: 0.015991 MiB name: RG_ring_3_58319 00:09:35.016 end memzones------- 00:09:35.016 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:35.016 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:09:35.016 list of free elements. 
size: 10.802490 MiB 00:09:35.016 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:35.016 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:35.016 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:35.016 element at address: 0x200000400000 with size: 0.993958 MiB 00:09:35.016 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:35.016 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:35.016 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:35.016 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:35.016 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:09:35.016 element at address: 0x20000a600000 with size: 0.488892 MiB 00:09:35.016 element at address: 0x200000c00000 with size: 0.486267 MiB 00:09:35.016 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:35.016 element at address: 0x200003e00000 with size: 0.480286 MiB 00:09:35.016 element at address: 0x200028200000 with size: 0.395752 MiB 00:09:35.016 element at address: 0x200000800000 with size: 0.351746 MiB 00:09:35.016 list of standard malloc elements. size: 199.268616 MiB 00:09:35.016 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:35.016 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:35.016 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:35.016 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:35.016 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:35.016 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:35.016 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:35.016 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:35.016 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:35.016 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:09:35.016 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000085e580 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087e840 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087e900 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f080 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f140 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f200 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f380 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f440 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f500 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:35.016 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:09:35.016 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:09:35.017 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:35.017 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:09:35.017 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:09:35.017 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:09:35.018 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:35.018 element at address: 0x200028265500 with size: 0.000183 MiB 00:09:35.018 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c480 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c540 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c600 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c780 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c840 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c900 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d080 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d140 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d200 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d380 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d440 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d500 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d680 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d740 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d800 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826d980 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826da40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826db00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826de00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826df80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e040 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e100 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e280 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e400 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e580 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e640 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e700 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e880 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826e940 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f000 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f180 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f240 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f300 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f480 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f540 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f600 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f780 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f840 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f900 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:35.018 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:35.018 list of memzone associated elements. 
size: 607.928894 MiB 00:09:35.018 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:35.018 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:35.018 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:35.018 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:35.018 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:35.018 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58319_0 00:09:35.018 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:35.018 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58319_0 00:09:35.018 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:35.018 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58319_0 00:09:35.018 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:35.018 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:35.018 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:35.018 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:35.018 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:35.018 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58319_0 00:09:35.018 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:35.018 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58319 00:09:35.018 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:35.018 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58319 00:09:35.018 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:35.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:35.018 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:35.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:35.018 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:35.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:35.018 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:35.018 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:35.018 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:35.018 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58319 00:09:35.018 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:35.018 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58319 00:09:35.018 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:35.018 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58319 00:09:35.018 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:35.018 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58319 00:09:35.018 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:35.018 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58319 00:09:35.018 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:35.018 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58319 00:09:35.018 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:35.018 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:35.018 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:35.018 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:35.018 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:35.018 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:35.018 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:35.018 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58319 00:09:35.018 element at address: 0x20000085e640 with size: 0.125488 MiB 00:09:35.018 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58319 00:09:35.018 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:35.018 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:35.018 element at address: 0x200028265680 with size: 0.023743 MiB 00:09:35.018 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:35.018 element at address: 0x20000085a380 with size: 0.016113 MiB 00:09:35.018 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58319 00:09:35.018 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:09:35.018 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:35.018 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:09:35.019 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58319 00:09:35.019 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:35.019 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58319 00:09:35.019 element at address: 0x20000085a180 with size: 0.000305 MiB 00:09:35.019 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58319 00:09:35.019 element at address: 0x20002826c280 with size: 0.000305 MiB 00:09:35.019 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:35.019 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:35.019 05:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58319 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58319 ']' 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58319 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58319 00:09:35.019 killing process with pid 58319 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58319' 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58319 00:09:35.019 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58319 00:09:35.278 00:09:35.278 real 0m1.786s 00:09:35.278 user 0m2.154s 00:09:35.278 sys 0m0.334s 00:09:35.278 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.278 ************************************ 00:09:35.278 END TEST dpdk_mem_utility 00:09:35.278 ************************************ 00:09:35.278 05:19:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:35.278 05:19:49 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:35.278 05:19:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:35.278 05:19:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.278 05:19:49 -- common/autotest_common.sh@10 -- # set +x 
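The dpdk_mem_utility test that just ended exercises the flow traced above: start spdk_tgt, ask it over RPC to dump DPDK memory statistics, then post-process the dump with dpdk_mem_info.py. A hedged sketch of reproducing those steps by hand, using the paths as they appear in this log (the rpc.py form of the call is an assumption — the test itself goes through the rpc_cmd wrapper):

    ./build/bin/spdk_tgt &                     # target brings up the DPDK environment
    # wait for the RPC socket to come up, as waitforlisten does above, then:
    ./scripts/rpc.py env_dpdk_get_mem_stats    # reply above: {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # per-element breakdown for heap 0

The summary printed above shows one 818 MiB heap, nine mempools (msgpool, bdev_io, fsdev_io, the PDU and SCSI task pools, and the session/event pools) and six RG_ring_* memzones, all tagged with the target's pid 58319.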
00:09:35.278 ************************************ 00:09:35.278 START TEST event 00:09:35.278 ************************************ 00:09:35.278 05:19:49 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:35.278 * Looking for test storage... 00:09:35.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:35.278 05:19:49 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.278 05:19:49 event -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.278 05:19:49 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.537 05:19:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.537 05:19:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.537 05:19:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.537 05:19:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.537 05:19:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.537 05:19:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.537 05:19:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.537 05:19:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.537 05:19:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.537 05:19:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.537 05:19:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.537 05:19:49 event -- scripts/common.sh@344 -- # case "$op" in 00:09:35.537 05:19:49 event -- scripts/common.sh@345 -- # : 1 00:09:35.537 05:19:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.537 05:19:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.537 05:19:49 event -- scripts/common.sh@365 -- # decimal 1 00:09:35.537 05:19:49 event -- scripts/common.sh@353 -- # local d=1 00:09:35.537 05:19:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.537 05:19:49 event -- scripts/common.sh@355 -- # echo 1 00:09:35.537 05:19:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.537 05:19:49 event -- scripts/common.sh@366 -- # decimal 2 00:09:35.537 05:19:49 event -- scripts/common.sh@353 -- # local d=2 00:09:35.537 05:19:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.537 05:19:49 event -- scripts/common.sh@355 -- # echo 2 00:09:35.537 05:19:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.537 05:19:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.537 05:19:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.537 05:19:49 event -- scripts/common.sh@368 -- # return 0 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.537 --rc genhtml_branch_coverage=1 00:09:35.537 --rc genhtml_function_coverage=1 00:09:35.537 --rc genhtml_legend=1 00:09:35.537 --rc geninfo_all_blocks=1 00:09:35.537 --rc geninfo_unexecuted_blocks=1 00:09:35.537 00:09:35.537 ' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.537 --rc genhtml_branch_coverage=1 00:09:35.537 --rc genhtml_function_coverage=1 00:09:35.537 --rc genhtml_legend=1 00:09:35.537 --rc 
geninfo_all_blocks=1 00:09:35.537 --rc geninfo_unexecuted_blocks=1 00:09:35.537 00:09:35.537 ' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.537 --rc genhtml_branch_coverage=1 00:09:35.537 --rc genhtml_function_coverage=1 00:09:35.537 --rc genhtml_legend=1 00:09:35.537 --rc geninfo_all_blocks=1 00:09:35.537 --rc geninfo_unexecuted_blocks=1 00:09:35.537 00:09:35.537 ' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.537 --rc genhtml_branch_coverage=1 00:09:35.537 --rc genhtml_function_coverage=1 00:09:35.537 --rc genhtml_legend=1 00:09:35.537 --rc geninfo_all_blocks=1 00:09:35.537 --rc geninfo_unexecuted_blocks=1 00:09:35.537 00:09:35.537 ' 00:09:35.537 05:19:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:35.537 05:19:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:35.537 05:19:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:35.537 05:19:49 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.537 05:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 ************************************ 00:09:35.537 START TEST event_perf 00:09:35.537 ************************************ 00:09:35.537 05:19:49 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:35.537 Running I/O for 1 seconds...[2024-11-20 05:19:49.935410] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:35.537 [2024-11-20 05:19:49.935753] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58404 ] 00:09:35.796 [2024-11-20 05:19:50.088533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.796 [2024-11-20 05:19:50.144312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.796 Running I/O for 1 seconds...[2024-11-20 05:19:50.144447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.796 [2024-11-20 05:19:50.144541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.796 [2024-11-20 05:19:50.144551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.730 00:09:36.730 lcore 0: 174280 00:09:36.730 lcore 1: 174282 00:09:36.730 lcore 2: 174279 00:09:36.730 lcore 3: 174277 00:09:36.730 done. 
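The four lcore counters just printed are event_perf's per-core event counts for a one-second run: -m takes a hex core mask, so 0xF selects lcores 0-3, and -t 1 is the run time in seconds, matching the invocation traced above:

    ./test/event/event_perf/event_perf -m 0xF -t 1    # mask 0xF = cores 0,1,2,3; run for 1 second

Each of the four reactors processed roughly 174k events before "done." was printed.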
00:09:36.730 00:09:36.730 real 0m1.288s 00:09:36.730 user 0m4.095s 00:09:36.730 sys 0m0.050s 00:09:36.730 05:19:51 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.730 05:19:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 ************************************ 00:09:36.730 END TEST event_perf 00:09:36.730 ************************************ 00:09:36.730 05:19:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.730 05:19:51 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:36.730 05:19:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.730 05:19:51 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.988 ************************************ 00:09:36.988 START TEST event_reactor 00:09:36.988 ************************************ 00:09:36.988 05:19:51 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.988 [2024-11-20 05:19:51.264508] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:36.988 [2024-11-20 05:19:51.264854] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58437 ] 00:09:36.988 [2024-11-20 05:19:51.404530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.988 [2024-11-20 05:19:51.444803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.362 test_start 00:09:38.362 oneshot 00:09:38.362 tick 100 00:09:38.362 tick 100 00:09:38.362 tick 250 00:09:38.362 tick 100 00:09:38.362 tick 100 00:09:38.362 tick 100 00:09:38.362 tick 250 00:09:38.362 tick 500 00:09:38.362 tick 100 00:09:38.362 tick 100 00:09:38.362 tick 250 00:09:38.362 tick 100 00:09:38.362 tick 100 00:09:38.362 test_end 00:09:38.362 00:09:38.362 real 0m1.237s 00:09:38.362 user 0m1.096s 00:09:38.362 sys 0m0.029s 00:09:38.362 05:19:52 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.362 05:19:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:38.362 ************************************ 00:09:38.362 END TEST event_reactor 00:09:38.362 ************************************ 00:09:38.362 05:19:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.362 05:19:52 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:38.362 05:19:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.362 05:19:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.362 ************************************ 00:09:38.362 START TEST event_reactor_perf 00:09:38.362 ************************************ 00:09:38.362 05:19:52 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.362 [2024-11-20 05:19:52.543112] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
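Every test in this log is driven through run_test, which prints the START TEST / END TEST banners and the real/user/sys timing seen after each one. A hedged sketch of that wrapper's shape — not the actual autotest_common.sh implementation, which also toggles xtrace and checks its arguments:

    run_test() {
      local name=$1; shift
      echo "************************"
      echo "START TEST $name"
      echo "************************"
      time "$@"; local rc=$?           # time emits the real/user/sys lines seen above
      echo "************************"
      echo "END TEST $name"
      echo "************************"
      return $rc
    }
    run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1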
00:09:38.362 [2024-11-20 05:19:52.543412] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58472 ] 00:09:38.362 [2024-11-20 05:19:52.685804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.362 [2024-11-20 05:19:52.725037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.356 test_start 00:09:39.356 test_end 00:09:39.356 Performance: 348666 events per second 00:09:39.356 00:09:39.356 real 0m1.248s 00:09:39.356 user 0m1.101s 00:09:39.356 sys 0m0.038s 00:09:39.356 05:19:53 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:39.356 05:19:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 ************************************ 00:09:39.356 END TEST event_reactor_perf 00:09:39.356 ************************************ 00:09:39.356 05:19:53 event -- event/event.sh@49 -- # uname -s 00:09:39.356 05:19:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:39.356 05:19:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:39.356 05:19:53 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:39.356 05:19:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:39.356 05:19:53 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.356 ************************************ 00:09:39.356 START TEST event_scheduler 00:09:39.356 ************************************ 00:09:39.356 05:19:53 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:39.616 * Looking for test storage... 
00:09:39.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:39.616 05:19:53 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.616 05:19:53 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.616 05:19:53 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.616 05:19:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.616 --rc genhtml_branch_coverage=1 00:09:39.616 --rc genhtml_function_coverage=1 00:09:39.616 --rc genhtml_legend=1 00:09:39.616 --rc geninfo_all_blocks=1 00:09:39.616 --rc geninfo_unexecuted_blocks=1 00:09:39.616 00:09:39.616 ' 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.616 --rc genhtml_branch_coverage=1 00:09:39.616 --rc genhtml_function_coverage=1 00:09:39.616 --rc genhtml_legend=1 00:09:39.616 --rc geninfo_all_blocks=1 00:09:39.616 --rc geninfo_unexecuted_blocks=1 00:09:39.616 00:09:39.616 ' 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.616 --rc genhtml_branch_coverage=1 00:09:39.616 --rc genhtml_function_coverage=1 00:09:39.616 --rc genhtml_legend=1 00:09:39.616 --rc geninfo_all_blocks=1 00:09:39.616 --rc geninfo_unexecuted_blocks=1 00:09:39.616 00:09:39.616 ' 00:09:39.616 05:19:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.616 --rc genhtml_branch_coverage=1 00:09:39.616 --rc genhtml_function_coverage=1 00:09:39.616 --rc genhtml_legend=1 00:09:39.616 --rc geninfo_all_blocks=1 00:09:39.616 --rc geninfo_unexecuted_blocks=1 00:09:39.616 00:09:39.616 ' 00:09:39.616 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:39.616 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58542 00:09:39.617 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:39.617 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.617 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58542 00:09:39.617 05:19:54 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58542 ']' 00:09:39.617 05:19:54 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.617 05:19:54 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:39.617 05:19:54 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.617 05:19:54 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:39.617 05:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.617 [2024-11-20 05:19:54.096093] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:39.617 [2024-11-20 05:19:54.096464] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58542 ] 00:09:39.875 [2024-11-20 05:19:54.249545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.875 [2024-11-20 05:19:54.305081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.875 [2024-11-20 05:19:54.305146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.875 [2024-11-20 05:19:54.305243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.875 [2024-11-20 05:19:54.305261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:09:40.134 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.134 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.134 POWER: Cannot set governor of lcore 0 to performance 00:09:40.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.134 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.134 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.134 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:40.134 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:40.134 POWER: Unable to set Power Management Environment for lcore 0 00:09:40.134 [2024-11-20 05:19:54.435614] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:40.134 [2024-11-20 05:19:54.435629] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:40.134 [2024-11-20 05:19:54.435639] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:40.134 [2024-11-20 05:19:54.435651] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:40.134 [2024-11-20 05:19:54.435659] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:40.134 [2024-11-20 05:19:54.435666] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.134 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.134 [2024-11-20 05:19:54.473107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.134 [2024-11-20 05:19:54.494530] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.134 05:19:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.134 05:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.134 ************************************ 00:09:40.134 START TEST scheduler_create_thread 00:09:40.134 ************************************ 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.134 2 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:40.134 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 3 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 4 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 5 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 6 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 7 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 8 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 9 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 10 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.135 05:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.703 05:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.703 05:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:40.704 05:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:40.704 05:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.704 05:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 ************************************ 00:09:42.079 END TEST scheduler_create_thread 00:09:42.079 ************************************ 00:09:42.079 05:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.079 00:09:42.079 real 0m1.753s 00:09:42.079 user 0m0.019s 00:09:42.079 sys 0m0.002s 00:09:42.079 05:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.079 05:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.080 05:19:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:42.080 05:19:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58542 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58542 ']' 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58542 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58542 00:09:42.080 killing process with pid 58542 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58542' 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58542 00:09:42.080 05:19:56 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58542 00:09:42.338 [2024-11-20 05:19:56.737288] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:42.596 00:09:42.596 real 0m3.048s 00:09:42.596 user 0m4.101s 00:09:42.596 sys 0m0.290s 00:09:42.596 ************************************ 00:09:42.596 END TEST event_scheduler 00:09:42.596 ************************************ 00:09:42.596 05:19:56 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.596 05:19:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.596 05:19:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:42.596 05:19:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:42.596 05:19:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:42.596 05:19:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.596 05:19:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:42.596 ************************************ 00:09:42.596 START TEST app_repeat 00:09:42.596 ************************************ 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:42.596 Process app_repeat pid: 58623 00:09:42.596 spdk_app_start Round 0 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58623 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58623' 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:42.596 05:19:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58623 /var/tmp/spdk-nbd.sock 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.596 05:19:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:42.596 [2024-11-20 05:19:56.957974] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:09:42.596 [2024-11-20 05:19:56.958304] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58623 ] 00:09:42.596 [2024-11-20 05:19:57.108314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.855 [2024-11-20 05:19:57.159147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.855 [2024-11-20 05:19:57.159167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.855 [2024-11-20 05:19:57.195264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.855 05:19:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.855 05:19:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:42.855 05:19:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.422 Malloc0 00:09:43.422 05:19:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.681 Malloc1 00:09:43.681 05:19:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.681 05:19:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:43.940 /dev/nbd0 00:09:43.940 05:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:43.940 05:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:43.940 1+0 records in 00:09:43.940 1+0 records out 00:09:43.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274546 s, 14.9 MB/s 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:43.940 05:19:58 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:43.940 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.940 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.940 05:19:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:44.198 /dev/nbd1 00:09:44.198 05:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:44.198 05:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:44.198 1+0 records in 00:09:44.198 1+0 records out 00:09:44.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035229 s, 11.6 MB/s 00:09:44.198 05:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.457 05:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:44.457 05:19:58 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.457 05:19:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:44.457 05:19:58 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:09:44.457 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.457 05:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.457 05:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.457 05:19:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.457 05:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.715 05:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:44.715 { 00:09:44.715 "nbd_device": "/dev/nbd0", 00:09:44.715 "bdev_name": "Malloc0" 00:09:44.715 }, 00:09:44.715 { 00:09:44.715 "nbd_device": "/dev/nbd1", 00:09:44.715 "bdev_name": "Malloc1" 00:09:44.715 } 00:09:44.715 ]' 00:09:44.715 05:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:44.715 { 00:09:44.715 "nbd_device": "/dev/nbd0", 00:09:44.715 "bdev_name": "Malloc0" 00:09:44.715 }, 00:09:44.715 { 00:09:44.715 "nbd_device": "/dev/nbd1", 00:09:44.715 "bdev_name": "Malloc1" 00:09:44.715 } 00:09:44.715 ]' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:44.715 /dev/nbd1' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:44.715 /dev/nbd1' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:44.715 256+0 records in 00:09:44.715 256+0 records out 00:09:44.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077333 s, 136 MB/s 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:44.715 256+0 records in 00:09:44.715 256+0 records out 00:09:44.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269375 s, 38.9 MB/s 00:09:44.715 05:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:44.716 256+0 records in 00:09:44.716 
256+0 records out 00:09:44.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0402494 s, 26.1 MB/s 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.716 05:19:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.283 05:19:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:45.541 05:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:45.541 05:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.814 05:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:46.086 05:20:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:46.086 05:20:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:46.345 05:20:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:46.345 [2024-11-20 05:20:00.780250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.345 [2024-11-20 05:20:00.816496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.345 [2024-11-20 05:20:00.816510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.345 [2024-11-20 05:20:00.846042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.345 [2024-11-20 05:20:00.846148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:46.345 [2024-11-20 05:20:00.846163] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:49.631 05:20:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:49.631 spdk_app_start Round 1 00:09:49.631 05:20:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:49.631 05:20:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58623 /var/tmp/spdk-nbd.sock 00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.631 05:20:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:49.631 05:20:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.631 05:20:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:49.631 05:20:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.199 Malloc0 00:09:50.199 05:20:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.764 Malloc1 00:09:50.764 05:20:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.764 05:20:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.764 05:20:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.765 05:20:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:51.021 /dev/nbd0 00:09:51.021 05:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.021 05:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.021 1+0 records in 00:09:51.021 1+0 records out 
00:09:51.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026559 s, 15.4 MB/s 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:51.021 05:20:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:51.021 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.021 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.021 05:20:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:51.279 /dev/nbd1 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.279 1+0 records in 00:09:51.279 1+0 records out 00:09:51.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321769 s, 12.7 MB/s 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:51.279 05:20:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.279 05:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.845 05:20:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:51.845 { 00:09:51.845 "nbd_device": "/dev/nbd0", 00:09:51.845 "bdev_name": "Malloc0" 00:09:51.845 }, 00:09:51.845 { 00:09:51.845 "nbd_device": "/dev/nbd1", 00:09:51.845 "bdev_name": "Malloc1" 00:09:51.846 } 
00:09:51.846 ]' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.846 { 00:09:51.846 "nbd_device": "/dev/nbd0", 00:09:51.846 "bdev_name": "Malloc0" 00:09:51.846 }, 00:09:51.846 { 00:09:51.846 "nbd_device": "/dev/nbd1", 00:09:51.846 "bdev_name": "Malloc1" 00:09:51.846 } 00:09:51.846 ]' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.846 /dev/nbd1' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.846 /dev/nbd1' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:51.846 256+0 records in 00:09:51.846 256+0 records out 00:09:51.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735728 s, 143 MB/s 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.846 256+0 records in 00:09:51.846 256+0 records out 00:09:51.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348364 s, 30.1 MB/s 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.846 256+0 records in 00:09:51.846 256+0 records out 00:09:51.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298493 s, 35.1 MB/s 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.846 05:20:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.846 05:20:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.413 05:20:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.671 05:20:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:52.930 05:20:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:52.930 05:20:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.930 05:20:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.930 05:20:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.930 05:20:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.187 05:20:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.187 05:20:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.187 05:20:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.446 05:20:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.446 05:20:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:53.705 05:20:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:53.964 [2024-11-20 05:20:08.307999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.964 [2024-11-20 05:20:08.359886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.964 [2024-11-20 05:20:08.359965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.964 [2024-11-20 05:20:08.398654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:53.964 [2024-11-20 05:20:08.398795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:53.964 [2024-11-20 05:20:08.398818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:57.275 spdk_app_start Round 2 00:09:57.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:57.275 05:20:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:57.275 05:20:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:57.275 05:20:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58623 /var/tmp/spdk-nbd.sock 00:09:57.275 05:20:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:09:57.275 05:20:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.276 05:20:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:57.276 05:20:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.869 Malloc0 00:09:57.869 05:20:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:58.127 Malloc1 00:09:58.127 05:20:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.127 05:20:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:58.694 /dev/nbd0 00:09:58.694 05:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:58.694 05:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.694 1+0 records in 00:09:58.694 1+0 records out 
00:09:58.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571424 s, 7.2 MB/s 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:58.694 05:20:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:58.694 05:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.694 05:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.694 05:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:58.953 /dev/nbd1 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:59.211 1+0 records in 00:09:59.211 1+0 records out 00:09:59.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397319 s, 10.3 MB/s 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:59.211 05:20:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.211 05:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:59.470 { 00:09:59.470 "nbd_device": "/dev/nbd0", 00:09:59.470 "bdev_name": "Malloc0" 00:09:59.470 }, 00:09:59.470 { 00:09:59.470 "nbd_device": "/dev/nbd1", 00:09:59.470 "bdev_name": "Malloc1" 00:09:59.470 } 
00:09:59.470 ]' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:59.470 { 00:09:59.470 "nbd_device": "/dev/nbd0", 00:09:59.470 "bdev_name": "Malloc0" 00:09:59.470 }, 00:09:59.470 { 00:09:59.470 "nbd_device": "/dev/nbd1", 00:09:59.470 "bdev_name": "Malloc1" 00:09:59.470 } 00:09:59.470 ]' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:59.470 /dev/nbd1' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:59.470 /dev/nbd1' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:59.470 256+0 records in 00:09:59.470 256+0 records out 00:09:59.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048128 s, 218 MB/s 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.470 05:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:59.729 256+0 records in 00:09:59.729 256+0 records out 00:09:59.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312888 s, 33.5 MB/s 00:09:59.729 05:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.729 05:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:59.729 256+0 records in 00:09:59.729 256+0 records out 00:09:59.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348043 s, 30.1 MB/s 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.729 05:20:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.295 05:20:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.553 05:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:00.553 05:20:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:00.553 05:20:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.553 05:20:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:00.553 05:20:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.553 05:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:01.119 05:20:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:01.119 05:20:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:01.378 05:20:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:01.378 [2024-11-20 05:20:15.882569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.637 [2024-11-20 05:20:15.922675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.637 [2024-11-20 05:20:15.922688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.637 [2024-11-20 05:20:15.953156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.637 [2024-11-20 05:20:15.953260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:01.637 [2024-11-20 05:20:15.953276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:04.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:04.934 05:20:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58623 /var/tmp/spdk-nbd.sock 00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
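The nbd_dd_data_verify calls traced above (once for write, once for verify) reduce to a dd-and-cmp loop over the exported devices. A minimal sketch of that pattern, with illustrative variable names rather than the helper's own:

    # write: fill a 1 MiB scratch file with random data, then copy it onto each nbd device
    tmp_file=/tmp/nbdrandtest            # assumed scratch location for this sketch
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1 MiB of each device against the scratch file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # any non-zero exit fails the test
    done
    rm "$tmp_file"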
00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.934 05:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:04.934 05:20:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58623 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58623 ']' 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58623 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58623 00:10:04.934 killing process with pid 58623 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58623' 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58623 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58623 00:10:04.934 spdk_app_start is called in Round 0. 00:10:04.934 Shutdown signal received, stop current app iteration 00:10:04.934 Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 reinitialization... 00:10:04.934 spdk_app_start is called in Round 1. 00:10:04.934 Shutdown signal received, stop current app iteration 00:10:04.934 Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 reinitialization... 00:10:04.934 spdk_app_start is called in Round 2. 00:10:04.934 Shutdown signal received, stop current app iteration 00:10:04.934 Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 reinitialization... 00:10:04.934 spdk_app_start is called in Round 3. 00:10:04.934 Shutdown signal received, stop current app iteration 00:10:04.934 05:20:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:04.934 05:20:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:04.934 00:10:04.934 real 0m22.400s 00:10:04.934 user 0m52.444s 00:10:04.934 sys 0m3.530s 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.934 05:20:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:04.934 ************************************ 00:10:04.934 END TEST app_repeat 00:10:04.934 ************************************ 00:10:04.934 05:20:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:04.934 05:20:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:04.934 05:20:19 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.934 05:20:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.934 05:20:19 event -- common/autotest_common.sh@10 -- # set +x 00:10:04.934 ************************************ 00:10:04.934 START TEST cpu_locks 00:10:04.934 ************************************ 00:10:04.934 05:20:19 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:04.934 * Looking for test storage... 
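The START TEST / END TEST banners and the real/user/sys summary wrapped around app_repeat come from the run_test wrapper. A rough, simplified sketch of such a wrapper; the real helper in autotest_common.sh also manages xtrace and per-test timing bookkeeping, which is elided here:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh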
00:10:04.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:04.934 05:20:19 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.934 05:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.934 05:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.193 05:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.193 05:20:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.194 05:20:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.194 --rc genhtml_branch_coverage=1 00:10:05.194 --rc genhtml_function_coverage=1 00:10:05.194 --rc genhtml_legend=1 00:10:05.194 --rc geninfo_all_blocks=1 00:10:05.194 --rc geninfo_unexecuted_blocks=1 00:10:05.194 00:10:05.194 ' 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.194 --rc genhtml_branch_coverage=1 00:10:05.194 --rc genhtml_function_coverage=1 
00:10:05.194 --rc genhtml_legend=1 00:10:05.194 --rc geninfo_all_blocks=1 00:10:05.194 --rc geninfo_unexecuted_blocks=1 00:10:05.194 00:10:05.194 ' 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.194 --rc genhtml_branch_coverage=1 00:10:05.194 --rc genhtml_function_coverage=1 00:10:05.194 --rc genhtml_legend=1 00:10:05.194 --rc geninfo_all_blocks=1 00:10:05.194 --rc geninfo_unexecuted_blocks=1 00:10:05.194 00:10:05.194 ' 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.194 --rc genhtml_branch_coverage=1 00:10:05.194 --rc genhtml_function_coverage=1 00:10:05.194 --rc genhtml_legend=1 00:10:05.194 --rc geninfo_all_blocks=1 00:10:05.194 --rc geninfo_unexecuted_blocks=1 00:10:05.194 00:10:05.194 ' 00:10:05.194 05:20:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:05.194 05:20:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:05.194 05:20:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:05.194 05:20:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.194 05:20:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.194 ************************************ 00:10:05.194 START TEST default_locks 00:10:05.194 ************************************ 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59095 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59095 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59095 ']' 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.194 05:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.194 [2024-11-20 05:20:19.655633] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
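The lcov probe above runs through the version comparison in scripts/common.sh: split both versions on '.', '-' and ':' and compare field by field. A compressed sketch of the same idea, assuming purely numeric fields (the full cmp_versions also validates each field with its decimal helper, as the [[ 1 =~ ^[0-9]+$ ]] checks in the trace show); the function name here is illustrative:

    version_lt() {                 # returns 0 if $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2"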
00:10:05.194 [2024-11-20 05:20:19.655748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59095 ] 00:10:05.453 [2024-11-20 05:20:19.804408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.453 [2024-11-20 05:20:19.853441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.453 [2024-11-20 05:20:19.905752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.712 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:05.712 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:10:05.712 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59095 00:10:05.712 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.712 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59095 ']' 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:06.278 killing process with pid 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59095' 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59095 00:10:06.278 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59095 00:10:06.537 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59095 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59095 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59095 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59095 ']' 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.538 
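locks_exist, traced above for pid 59095, is a one-line check over the kernel's lock table. Roughly (the pid variable is illustrative):

    pid=59095   # the spdk_tgt -m 0x1 instance started above
    # a single-core target flocks /var/tmp/spdk_cpu_lock_000; lslocks lists the locks
    # held by the pid, and grep -q just tests that one of them is a core-lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"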
05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 ERROR: process (pid: 59095) is no longer running 00:10:06.538 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59095) - No such process 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:06.538 00:10:06.538 real 0m1.280s 00:10:06.538 user 0m1.390s 00:10:06.538 sys 0m0.520s 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.538 ************************************ 00:10:06.538 END TEST default_locks 00:10:06.538 05:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 ************************************ 00:10:06.538 05:20:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:06.538 05:20:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.538 05:20:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.538 05:20:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 ************************************ 00:10:06.538 START TEST default_locks_via_rpc 00:10:06.538 ************************************ 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59139 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59139 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59139 ']' 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:10:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.538 05:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 [2024-11-20 05:20:20.991383] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:06.538 [2024-11-20 05:20:20.991535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:10:06.797 [2024-11-20 05:20:21.142304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.797 [2024-11-20 05:20:21.192460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.797 [2024-11-20 05:20:21.244393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59139 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59139 00:10:07.055 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59139 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59139 ']' 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59139 00:10:07.621 05:20:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59139 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59139' 00:10:07.621 killing process with pid 59139 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59139 00:10:07.621 05:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59139 00:10:07.881 00:10:07.881 real 0m1.287s 00:10:07.881 user 0m1.483s 00:10:07.881 sys 0m0.523s 00:10:07.881 05:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:07.881 05:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 ************************************ 00:10:07.881 END TEST default_locks_via_rpc 00:10:07.881 ************************************ 00:10:07.881 05:20:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:07.881 05:20:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:07.881 05:20:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:07.881 05:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 ************************************ 00:10:07.881 START TEST non_locking_app_on_locked_coremask 00:10:07.881 ************************************ 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59177 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59177 /var/tmp/spdk.sock 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59177 ']' 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
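default_locks_via_rpc above exercises the same lock files through RPC instead of start-up flags: disable the cpumask locks at runtime, confirm none are left, then re-enable them. A sketch of that flow using the rpc.py path and socket from this run; the leftover-glob check assumes nullglob semantics, matching the empty lock_files=() array in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path used throughout this run
    pid=59139                                         # the -m 0x1 target started above
    "$rpc" framework_disable_cpumask_locks            # drop the per-core lock files
    shopt -s nullglob
    leftover=(/var/tmp/spdk_cpu_lock_*)
    (( ${#leftover[@]} == 0 )) && echo "no core locks held"
    "$rpc" framework_enable_cpumask_locks             # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired by pid $pid"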
00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.881 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 [2024-11-20 05:20:22.310175] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:07.881 [2024-11-20 05:20:22.310273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ] 00:10:08.140 [2024-11-20 05:20:22.453313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.140 [2024-11-20 05:20:22.499987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.140 [2024-11-20 05:20:22.551505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59186 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59186 /var/tmp/spdk2.sock 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59186 ']' 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:08.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:08.398 05:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.398 [2024-11-20 05:20:22.769616] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:08.399 [2024-11-20 05:20:22.769753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:10:08.656 [2024-11-20 05:20:22.939411] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
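The point of non_locking_app_on_locked_coremask above is that a second target can share core 0 as long as it opts out of the core-lock files, which is why the "CPU core locks deactivated" notice appears only for pid 59186. A sketch of the two launches, using the binary path and sockets from this run:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # first target claims core 0 (creates and flocks /var/tmp/spdk_cpu_lock_000)
    "$tgt" -m 0x1 &
    # second target shares core 0: --disable-cpumask-locks skips the claim entirely,
    # and -r points it at its own RPC socket so the two instances do not collide
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the test then waitforlisten's on each pid before checking the lock files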
00:10:08.657 [2024-11-20 05:20:22.939481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.657 [2024-11-20 05:20:23.017521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.657 [2024-11-20 05:20:23.099073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.591 05:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.591 05:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:09.591 05:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59177 00:10:09.591 05:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59177 00:10:09.591 05:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59177 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59177 ']' 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59177 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59177 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:10.157 killing process with pid 59177 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59177' 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59177 00:10:10.157 05:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59177 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59186 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59186 ']' 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59186 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59186 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:10.725 killing process with pid 59186 00:10:10.725 05:20:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59186' 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59186 00:10:10.725 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59186 00:10:10.983 00:10:10.983 real 0m3.055s 00:10:10.983 user 0m3.597s 00:10:10.983 sys 0m0.874s 00:10:10.983 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.983 05:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.983 ************************************ 00:10:10.983 END TEST non_locking_app_on_locked_coremask 00:10:10.983 ************************************ 00:10:10.983 05:20:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:10.984 05:20:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:10.984 05:20:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.984 05:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.984 ************************************ 00:10:10.984 START TEST locking_app_on_unlocked_coremask 00:10:10.984 ************************************ 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59249 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59249 /var/tmp/spdk.sock 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59249 ']' 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:10.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:10.984 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.984 [2024-11-20 05:20:25.389015] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:10.984 [2024-11-20 05:20:25.389118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:10:11.242 [2024-11-20 05:20:25.531511] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
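Each of these lock tests tears its targets down through killprocess, traced above for pids 59177 and 59186 (and earlier for 59095 and 59139). A condensed sketch of what the trace shows on Linux; the sudo special case behind the '[' reactor_0 = sudo ']' check is not exercised in this run and is omitted:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0               # already gone, nothing to clean up
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap it so the next test starts clean
    }

    killprocess 59186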
00:10:11.242 [2024-11-20 05:20:25.531604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.242 [2024-11-20 05:20:25.574327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.242 [2024-11-20 05:20:25.616248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59258 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59258 /var/tmp/spdk2.sock 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59258 ']' 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.242 05:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.499 [2024-11-20 05:20:25.829813] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:11.499 [2024-11-20 05:20:25.829974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ] 00:10:11.499 [2024-11-20 05:20:25.997849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.757 [2024-11-20 05:20:26.069019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.757 [2024-11-20 05:20:26.150669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.015 05:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:12.015 05:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:12.015 05:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59258 00:10:12.015 05:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.015 05:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59258 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59249 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59249 ']' 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59249 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59249 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59249' 00:10:12.950 killing process with pid 59249 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59249 00:10:12.950 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59249 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59258 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59258 ']' 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59258 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59258 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:13.517 killing process with pid 59258 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59258' 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59258 00:10:13.517 05:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59258 00:10:13.775 00:10:13.775 real 0m2.769s 00:10:13.775 user 0m3.267s 00:10:13.775 sys 0m0.930s 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.775 ************************************ 00:10:13.775 END TEST locking_app_on_unlocked_coremask 00:10:13.775 ************************************ 00:10:13.775 05:20:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:13.775 05:20:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:13.775 05:20:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.775 05:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.775 ************************************ 00:10:13.775 START TEST locking_app_on_locked_coremask 00:10:13.775 ************************************ 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59312 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59312 /var/tmp/spdk.sock 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59312 ']' 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.775 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.775 [2024-11-20 05:20:28.229897] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:13.775 [2024-11-20 05:20:28.230057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59312 ] 00:10:14.034 [2024-11-20 05:20:28.382117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.034 [2024-11-20 05:20:28.429248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.034 [2024-11-20 05:20:28.479175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59320 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59320 /var/tmp/spdk2.sock 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59320 /var/tmp/spdk2.sock 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59320 /var/tmp/spdk2.sock 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59320 ']' 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.293 05:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.293 [2024-11-20 05:20:28.661589] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
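locking_app_on_locked_coremask starts a second target against the core that pid 59312 already holds and asserts the expected failure with the NOT helper invoked above. At its core NOT is an exit-status inverter; a simplified sketch (the full helper also screens out signal exits and an optional allowed-status list, per the (( es > 128 )) and [[ -n '' ]] checks in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: not the failure we wanted
        (( !es == 0 ))               # succeeds only if the wrapped command failed
    }

    # expected to fail: pid 59320 never comes up because core 0 is already claimed
    NOT waitforlisten 59320 /var/tmp/spdk2.sock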
00:10:14.293 [2024-11-20 05:20:28.661682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:10:14.552 [2024-11-20 05:20:28.820753] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59312 has claimed it. 00:10:14.552 [2024-11-20 05:20:28.820861] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:15.117 ERROR: process (pid: 59320) is no longer running 00:10:15.117 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59320) - No such process 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59312 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59312 00:10:15.118 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59312 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59312 ']' 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59312 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59312 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.375 killing process with pid 59312 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59312' 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59312 00:10:15.375 05:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59312 00:10:15.636 00:10:15.636 real 0m1.939s 00:10:15.636 user 0m2.296s 00:10:15.636 sys 0m0.514s 00:10:15.636 05:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.636 ************************************ 00:10:15.637 
05:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.637 END TEST locking_app_on_locked_coremask 00:10:15.637 ************************************ 00:10:15.637 05:20:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:15.637 05:20:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:15.637 05:20:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.637 05:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.637 ************************************ 00:10:15.637 START TEST locking_overlapped_coremask 00:10:15.637 ************************************ 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59366 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59366 /var/tmp/spdk.sock 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59366 ']' 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.637 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.895 [2024-11-20 05:20:30.207828] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:15.895 [2024-11-20 05:20:30.207968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59366 ] 00:10:15.895 [2024-11-20 05:20:30.358064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.895 [2024-11-20 05:20:30.394106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.895 [2024-11-20 05:20:30.394229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.895 [2024-11-20 05:20:30.394237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.153 [2024-11-20 05:20:30.433780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59376 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59376 /var/tmp/spdk2.sock 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59376 /var/tmp/spdk2.sock 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:16.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59376 /var/tmp/spdk2.sock 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59376 ']' 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.153 05:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.153 [2024-11-20 05:20:30.622634] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
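The two targets in locking_overlapped_coremask are started with -m 0x7 and -m 0x1c; the masks overlap only on core 2, which is exactly the core named in the claim error below. A quick decode of the masks (hypothetical snippet, not part of the test script):

    for mask in 0x7 0x1c; do
        printf 'mask %-4s -> cores:' "$mask"
        for core in {0..7}; do
            (( (mask >> core) & 1 )) && printf ' %d' "$core"
        done
        echo
    done
    # mask 0x7  -> cores: 0 1 2   (first target, holds spdk_cpu_lock_000..002)
    # mask 0x1c -> cores: 2 3 4   (second target, collides on core 2)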
00:10:16.153 [2024-11-20 05:20:30.622724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:10:16.411 [2024-11-20 05:20:30.785366] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59366 has claimed it. 00:10:16.411 [2024-11-20 05:20:30.785438] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:16.979 ERROR: process (pid: 59376) is no longer running 00:10:16.979 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59376) - No such process 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59366 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59366 ']' 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59366 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59366 00:10:16.979 killing process with pid 59366 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59366' 00:10:16.979 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59366 00:10:16.979 05:20:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59366 00:10:17.238 ************************************ 00:10:17.238 END TEST locking_overlapped_coremask 00:10:17.238 ************************************ 00:10:17.238 00:10:17.238 real 0m1.512s 00:10:17.238 user 0m4.160s 00:10:17.238 sys 0m0.310s 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:17.238 05:20:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:17.238 05:20:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:17.238 05:20:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.238 05:20:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:17.238 ************************************ 00:10:17.238 START TEST locking_overlapped_coremask_via_rpc 00:10:17.238 ************************************ 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59416 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59416 /var/tmp/spdk.sock 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59416 ']' 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:17.238 05:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.498 [2024-11-20 05:20:31.751319] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:17.498 [2024-11-20 05:20:31.751401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59416 ] 00:10:17.498 [2024-11-20 05:20:31.897194] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:17.498 [2024-11-20 05:20:31.897242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.498 [2024-11-20 05:20:31.934605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.498 [2024-11-20 05:20:31.934699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.498 [2024-11-20 05:20:31.934705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.498 [2024-11-20 05:20:31.976182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59427 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59427 /var/tmp/spdk2.sock 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:17.756 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59427 ']' 00:10:17.757 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.757 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:17.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:17.757 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.757 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:17.757 05:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.757 [2024-11-20 05:20:32.169896] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:17.757 [2024-11-20 05:20:32.170010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:10:18.014 [2024-11-20 05:20:32.346332] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:18.014 [2024-11-20 05:20:32.346401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:18.014 [2024-11-20 05:20:32.421333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.014 [2024-11-20 05:20:32.425015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:18.014 [2024-11-20 05:20:32.425018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.014 [2024-11-20 05:20:32.513732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.949 [2024-11-20 05:20:33.203127] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59416 has claimed it. 
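The error just logged is the point of this test: the first target was started with -m 0x7 (cores 0-2) and holds lock files under /var/tmp/spdk_cpu_lock_*, while this second instance runs with -m 0x1c and --disable-cpumask-locks, so the overlap on core 2 only surfaces once framework_enable_cpumask_locks is requested over /var/tmp/spdk2.sock. A minimal sketch of reproducing the same collision by hand, using the binaries, flags and paths exactly as they appear in this run (the earlier locking_overlapped_coremask test is the no-RPC variant of the same scenario):

# start a target that claims cores 0-2 and drops one lock file per core
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
sleep 1
ls /var/tmp/spdk_cpu_lock_*   # expect ..._000 ..._001 ..._002

# a second target whose mask overlaps on core 2; with --disable-cpumask-locks it
# starts fine, and the conflict is only reported when the locks are requested via RPC
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The JSON-RPC exchange recorded next shows how that failure reaches the caller: error code -32603 with "Failed to claim CPU core: 2".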
00:10:18.949 request: 00:10:18.949 { 00:10:18.949 "method": "framework_enable_cpumask_locks", 00:10:18.949 "req_id": 1 00:10:18.949 } 00:10:18.949 Got JSON-RPC error response 00:10:18.949 response: 00:10:18.949 { 00:10:18.949 "code": -32603, 00:10:18.949 "message": "Failed to claim CPU core: 2" 00:10:18.949 } 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59416 /var/tmp/spdk.sock 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59416 ']' 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:18.949 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:19.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59427 /var/tmp/spdk2.sock 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59427 ']' 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.207 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:19.485 00:10:19.485 real 0m2.149s 00:10:19.485 user 0m1.296s 00:10:19.485 sys 0m0.151s 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.485 05:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.485 ************************************ 00:10:19.485 END TEST locking_overlapped_coremask_via_rpc 00:10:19.485 ************************************ 00:10:19.485 05:20:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:19.485 05:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59416 ]] 00:10:19.485 05:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59416 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59416 ']' 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59416 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59416 00:10:19.485 killing process with pid 59416 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59416' 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59416 00:10:19.485 05:20:33 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59416 00:10:19.771 05:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59427 ]] 00:10:19.771 05:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59427 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59427 ']' 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59427 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.771 
05:20:34 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59427 00:10:19.771 killing process with pid 59427 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59427' 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59427 00:10:19.771 05:20:34 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59427 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:20.029 Process with pid 59416 is not found 00:10:20.029 Process with pid 59427 is not found 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59416 ]] 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59416 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59416 ']' 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59416 00:10:20.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59416) - No such process 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59416 is not found' 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59427 ]] 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59427 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59427 ']' 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59427 00:10:20.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59427) - No such process 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59427 is not found' 00:10:20.029 05:20:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.029 00:10:20.029 real 0m15.166s 00:10:20.029 user 0m28.635s 00:10:20.029 sys 0m4.500s 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.029 05:20:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.029 ************************************ 00:10:20.029 END TEST cpu_locks 00:10:20.029 ************************************ 00:10:20.288 00:10:20.288 real 0m44.870s 00:10:20.288 user 1m31.693s 00:10:20.288 sys 0m8.670s 00:10:20.288 05:20:34 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.288 05:20:34 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.288 ************************************ 00:10:20.288 END TEST event 00:10:20.288 ************************************ 00:10:20.288 05:20:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.288 05:20:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:20.288 05:20:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.288 05:20:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.288 ************************************ 00:10:20.288 START TEST thread 00:10:20.288 ************************************ 00:10:20.288 05:20:34 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.288 * Looking for test storage... 
00:10:20.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:20.288 05:20:34 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.288 05:20:34 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.288 05:20:34 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.288 05:20:34 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.288 05:20:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.548 05:20:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.548 05:20:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.548 05:20:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.548 05:20:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.548 05:20:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.548 05:20:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.548 05:20:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.548 05:20:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.548 05:20:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.548 05:20:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.548 05:20:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:20.548 05:20:34 thread -- scripts/common.sh@345 -- # : 1 00:10:20.548 05:20:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.548 05:20:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.548 05:20:34 thread -- scripts/common.sh@365 -- # decimal 1 00:10:20.548 05:20:34 thread -- scripts/common.sh@353 -- # local d=1 00:10:20.548 05:20:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.548 05:20:34 thread -- scripts/common.sh@355 -- # echo 1 00:10:20.548 05:20:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.548 05:20:34 thread -- scripts/common.sh@366 -- # decimal 2 00:10:20.548 05:20:34 thread -- scripts/common.sh@353 -- # local d=2 00:10:20.548 05:20:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.548 05:20:34 thread -- scripts/common.sh@355 -- # echo 2 00:10:20.548 05:20:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.548 05:20:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.548 05:20:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.548 05:20:34 thread -- scripts/common.sh@368 -- # return 0 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.548 --rc genhtml_branch_coverage=1 00:10:20.548 --rc genhtml_function_coverage=1 00:10:20.548 --rc genhtml_legend=1 00:10:20.548 --rc geninfo_all_blocks=1 00:10:20.548 --rc geninfo_unexecuted_blocks=1 00:10:20.548 00:10:20.548 ' 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.548 --rc genhtml_branch_coverage=1 00:10:20.548 --rc genhtml_function_coverage=1 00:10:20.548 --rc genhtml_legend=1 00:10:20.548 --rc geninfo_all_blocks=1 00:10:20.548 --rc geninfo_unexecuted_blocks=1 00:10:20.548 00:10:20.548 ' 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:20.548 --rc genhtml_branch_coverage=1 00:10:20.548 --rc genhtml_function_coverage=1 00:10:20.548 --rc genhtml_legend=1 00:10:20.548 --rc geninfo_all_blocks=1 00:10:20.548 --rc geninfo_unexecuted_blocks=1 00:10:20.548 00:10:20.548 ' 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.548 --rc genhtml_branch_coverage=1 00:10:20.548 --rc genhtml_function_coverage=1 00:10:20.548 --rc genhtml_legend=1 00:10:20.548 --rc geninfo_all_blocks=1 00:10:20.548 --rc geninfo_unexecuted_blocks=1 00:10:20.548 00:10:20.548 ' 00:10:20.548 05:20:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.548 05:20:34 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.548 ************************************ 00:10:20.548 START TEST thread_poller_perf 00:10:20.548 ************************************ 00:10:20.548 05:20:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.548 [2024-11-20 05:20:34.837404] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:20.548 [2024-11-20 05:20:34.838017] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:10:20.548 [2024-11-20 05:20:34.984581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.548 [2024-11-20 05:20:35.022703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.548 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:21.921 [2024-11-20T05:20:36.434Z] ====================================== 00:10:21.921 [2024-11-20T05:20:36.434Z] busy:2207788971 (cyc) 00:10:21.921 [2024-11-20T05:20:36.434Z] total_run_count: 301000 00:10:21.921 [2024-11-20T05:20:36.434Z] tsc_hz: 2200000000 (cyc) 00:10:21.921 [2024-11-20T05:20:36.434Z] ====================================== 00:10:21.921 [2024-11-20T05:20:36.434Z] poller_cost: 7334 (cyc), 3333 (nsec) 00:10:21.921 ************************************ 00:10:21.921 END TEST thread_poller_perf 00:10:21.921 ************************************ 00:10:21.921 00:10:21.921 real 0m1.256s 00:10:21.921 user 0m1.110s 00:10:21.921 sys 0m0.035s 00:10:21.921 05:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.921 05:20:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:21.921 05:20:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:21.921 05:20:36 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:21.921 05:20:36 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.921 05:20:36 thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.921 ************************************ 00:10:21.921 START TEST thread_poller_perf 00:10:21.921 ************************************ 00:10:21.921 05:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:21.921 [2024-11-20 05:20:36.134206] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:21.921 [2024-11-20 05:20:36.134356] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59596 ] 00:10:21.921 [2024-11-20 05:20:36.289711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.921 [2024-11-20 05:20:36.338662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.921 Running 1000 pollers for 1 seconds with 0 microseconds period. 
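For the run that just finished, the summary lines fit together by simple division: the per-poll cost in cycles is the busy cycle count divided by the number of polls, and the nanosecond figure rescales that by the reported TSC frequency. The zero-period run announced above is summarized the same way right after. A quick back-of-the-envelope check with the numbers printed here (not the poller_perf source itself):

echo $(( 2207788971 / 301000 ))              # ~7334 cycles per poll
echo $(( 7334 * 1000000000 / 2200000000 ))   # ~3333 ns at the 2200000000 Hz TSC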
00:10:23.295 [2024-11-20T05:20:37.808Z] ====================================== 00:10:23.295 [2024-11-20T05:20:37.808Z] busy:2202587412 (cyc) 00:10:23.295 [2024-11-20T05:20:37.808Z] total_run_count: 3892000 00:10:23.295 [2024-11-20T05:20:37.808Z] tsc_hz: 2200000000 (cyc) 00:10:23.295 [2024-11-20T05:20:37.808Z] ====================================== 00:10:23.295 [2024-11-20T05:20:37.808Z] poller_cost: 565 (cyc), 256 (nsec) 00:10:23.296 00:10:23.296 real 0m1.270s 00:10:23.296 user 0m1.124s 00:10:23.296 sys 0m0.035s 00:10:23.296 05:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.296 05:20:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 ************************************ 00:10:23.296 END TEST thread_poller_perf 00:10:23.296 ************************************ 00:10:23.296 05:20:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:23.296 00:10:23.296 real 0m2.805s 00:10:23.296 user 0m2.409s 00:10:23.296 sys 0m0.178s 00:10:23.296 05:20:37 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.296 05:20:37 thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 ************************************ 00:10:23.296 END TEST thread 00:10:23.296 ************************************ 00:10:23.296 05:20:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:23.296 05:20:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:23.296 05:20:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:23.296 05:20:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.296 05:20:37 -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 ************************************ 00:10:23.296 START TEST app_cmdline 00:10:23.296 ************************************ 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:23.296 * Looking for test storage... 
00:10:23.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.296 05:20:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.296 --rc genhtml_branch_coverage=1 00:10:23.296 --rc genhtml_function_coverage=1 00:10:23.296 --rc genhtml_legend=1 00:10:23.296 --rc geninfo_all_blocks=1 00:10:23.296 --rc geninfo_unexecuted_blocks=1 00:10:23.296 00:10:23.296 ' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.296 --rc genhtml_branch_coverage=1 00:10:23.296 --rc genhtml_function_coverage=1 00:10:23.296 --rc genhtml_legend=1 00:10:23.296 --rc geninfo_all_blocks=1 00:10:23.296 --rc geninfo_unexecuted_blocks=1 00:10:23.296 
00:10:23.296 ' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.296 --rc genhtml_branch_coverage=1 00:10:23.296 --rc genhtml_function_coverage=1 00:10:23.296 --rc genhtml_legend=1 00:10:23.296 --rc geninfo_all_blocks=1 00:10:23.296 --rc geninfo_unexecuted_blocks=1 00:10:23.296 00:10:23.296 ' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.296 --rc genhtml_branch_coverage=1 00:10:23.296 --rc genhtml_function_coverage=1 00:10:23.296 --rc genhtml_legend=1 00:10:23.296 --rc geninfo_all_blocks=1 00:10:23.296 --rc geninfo_unexecuted_blocks=1 00:10:23.296 00:10:23.296 ' 00:10:23.296 05:20:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:23.296 05:20:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59674 00:10:23.296 05:20:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:23.296 05:20:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59674 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59674 ']' 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.296 05:20:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 [2024-11-20 05:20:37.737650] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
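While this target starts, note its command line: --rpcs-allowed spdk_get_version,rpc_get_methods restricts the RPC surface to exactly the two methods the cmdline test needs. A rough manual equivalent of what the test exercises below, against the default /var/tmp/spdk.sock socket (a sketch using the same rpc.py script):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # returns the version JSON shown further down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # lists only the two allowed methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 "Method not found"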
00:10:23.296 [2024-11-20 05:20:37.738433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:10:23.555 [2024-11-20 05:20:37.891297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.555 [2024-11-20 05:20:37.930075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.555 [2024-11-20 05:20:37.970573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.491 05:20:38 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.491 05:20:38 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:10:24.491 05:20:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:24.750 { 00:10:24.750 "version": "SPDK v25.01-pre git sha1 866ba5ffe", 00:10:24.750 "fields": { 00:10:24.750 "major": 25, 00:10:24.750 "minor": 1, 00:10:24.750 "patch": 0, 00:10:24.750 "suffix": "-pre", 00:10:24.750 "commit": "866ba5ffe" 00:10:24.750 } 00:10:24.750 } 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:24.750 05:20:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:24.750 05:20:39 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:25.008 request: 00:10:25.009 { 00:10:25.009 "method": "env_dpdk_get_mem_stats", 00:10:25.009 "req_id": 1 00:10:25.009 } 00:10:25.009 Got JSON-RPC error response 00:10:25.009 response: 00:10:25.009 { 00:10:25.009 "code": -32601, 00:10:25.009 "message": "Method not found" 00:10:25.009 } 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.009 05:20:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59674 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59674 ']' 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59674 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.009 05:20:39 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59674 00:10:25.267 killing process with pid 59674 00:10:25.267 05:20:39 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:25.267 05:20:39 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:25.267 05:20:39 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59674' 00:10:25.267 05:20:39 app_cmdline -- common/autotest_common.sh@971 -- # kill 59674 00:10:25.267 05:20:39 app_cmdline -- common/autotest_common.sh@976 -- # wait 59674 00:10:25.526 ************************************ 00:10:25.526 END TEST app_cmdline 00:10:25.526 ************************************ 00:10:25.526 00:10:25.526 real 0m2.321s 00:10:25.526 user 0m3.145s 00:10:25.526 sys 0m0.415s 00:10:25.526 05:20:39 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:25.526 05:20:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:25.526 05:20:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:25.526 05:20:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:25.526 05:20:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.526 05:20:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.526 ************************************ 00:10:25.526 START TEST version 00:10:25.526 ************************************ 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:25.526 * Looking for test storage... 
00:10:25.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1691 -- # lcov --version 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:25.526 05:20:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.526 05:20:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.526 05:20:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.526 05:20:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.526 05:20:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.526 05:20:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.526 05:20:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.526 05:20:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.526 05:20:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.526 05:20:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.526 05:20:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.526 05:20:39 version -- scripts/common.sh@344 -- # case "$op" in 00:10:25.526 05:20:39 version -- scripts/common.sh@345 -- # : 1 00:10:25.526 05:20:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.526 05:20:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.526 05:20:39 version -- scripts/common.sh@365 -- # decimal 1 00:10:25.526 05:20:39 version -- scripts/common.sh@353 -- # local d=1 00:10:25.526 05:20:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.526 05:20:39 version -- scripts/common.sh@355 -- # echo 1 00:10:25.526 05:20:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.526 05:20:39 version -- scripts/common.sh@366 -- # decimal 2 00:10:25.526 05:20:39 version -- scripts/common.sh@353 -- # local d=2 00:10:25.526 05:20:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.526 05:20:39 version -- scripts/common.sh@355 -- # echo 2 00:10:25.526 05:20:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.526 05:20:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.526 05:20:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.526 05:20:39 version -- scripts/common.sh@368 -- # return 0 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:25.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --rc geninfo_unexecuted_blocks=1 00:10:25.526 00:10:25.526 ' 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:25.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --rc geninfo_unexecuted_blocks=1 00:10:25.526 00:10:25.526 ' 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:25.526 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --rc geninfo_unexecuted_blocks=1 00:10:25.526 00:10:25.526 ' 00:10:25.526 05:20:39 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:25.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.526 --rc genhtml_branch_coverage=1 00:10:25.526 --rc genhtml_function_coverage=1 00:10:25.526 --rc genhtml_legend=1 00:10:25.526 --rc geninfo_all_blocks=1 00:10:25.526 --rc geninfo_unexecuted_blocks=1 00:10:25.526 00:10:25.526 ' 00:10:25.526 05:20:39 version -- app/version.sh@17 -- # get_header_version major 00:10:25.526 05:20:39 version -- app/version.sh@14 -- # cut -f2 00:10:25.526 05:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.526 05:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.526 05:20:39 version -- app/version.sh@17 -- # major=25 00:10:25.526 05:20:39 version -- app/version.sh@18 -- # get_header_version minor 00:10:25.526 05:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.526 05:20:39 version -- app/version.sh@14 -- # cut -f2 00:10:25.526 05:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.526 05:20:39 version -- app/version.sh@18 -- # minor=1 00:10:25.526 05:20:39 version -- app/version.sh@19 -- # get_header_version patch 00:10:25.527 05:20:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.527 05:20:39 version -- app/version.sh@14 -- # cut -f2 00:10:25.527 05:20:39 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.527 05:20:39 version -- app/version.sh@19 -- # patch=0 00:10:25.527 05:20:40 version -- app/version.sh@20 -- # get_header_version suffix 00:10:25.527 05:20:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:25.527 05:20:40 version -- app/version.sh@14 -- # cut -f2 00:10:25.527 05:20:40 version -- app/version.sh@14 -- # tr -d '"' 00:10:25.527 05:20:40 version -- app/version.sh@20 -- # suffix=-pre 00:10:25.527 05:20:40 version -- app/version.sh@22 -- # version=25.1 00:10:25.527 05:20:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:25.527 05:20:40 version -- app/version.sh@28 -- # version=25.1rc0 00:10:25.527 05:20:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:25.527 05:20:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:25.791 05:20:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:25.791 05:20:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:25.791 00:10:25.791 real 0m0.227s 00:10:25.791 user 0m0.154s 00:10:25.791 sys 0m0.104s 00:10:25.791 ************************************ 00:10:25.791 END TEST version 00:10:25.791 ************************************ 00:10:25.791 05:20:40 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:25.791 05:20:40 version -- common/autotest_common.sh@10 -- # set +x 00:10:25.791 05:20:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:25.791 05:20:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:25.791 05:20:40 -- spdk/autotest.sh@194 -- # uname -s 00:10:25.791 05:20:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:25.791 05:20:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:25.791 05:20:40 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:25.791 05:20:40 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:25.791 05:20:40 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:25.791 05:20:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:25.791 05:20:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.791 05:20:40 -- common/autotest_common.sh@10 -- # set +x 00:10:25.791 ************************************ 00:10:25.791 START TEST spdk_dd 00:10:25.791 ************************************ 00:10:25.791 05:20:40 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:25.791 * Looking for test storage... 00:10:25.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:25.791 05:20:40 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:25.791 05:20:40 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:10:25.791 05:20:40 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@345 -- # : 1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@368 -- # return 0 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.792 --rc genhtml_branch_coverage=1 00:10:25.792 --rc genhtml_function_coverage=1 00:10:25.792 --rc genhtml_legend=1 00:10:25.792 --rc geninfo_all_blocks=1 00:10:25.792 --rc geninfo_unexecuted_blocks=1 00:10:25.792 00:10:25.792 ' 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.792 --rc genhtml_branch_coverage=1 00:10:25.792 --rc genhtml_function_coverage=1 00:10:25.792 --rc genhtml_legend=1 00:10:25.792 --rc geninfo_all_blocks=1 00:10:25.792 --rc geninfo_unexecuted_blocks=1 00:10:25.792 00:10:25.792 ' 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.792 --rc genhtml_branch_coverage=1 00:10:25.792 --rc genhtml_function_coverage=1 00:10:25.792 --rc genhtml_legend=1 00:10:25.792 --rc geninfo_all_blocks=1 00:10:25.792 --rc geninfo_unexecuted_blocks=1 00:10:25.792 00:10:25.792 ' 00:10:25.792 05:20:40 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.792 --rc genhtml_branch_coverage=1 00:10:25.792 --rc genhtml_function_coverage=1 00:10:25.792 --rc genhtml_legend=1 00:10:25.792 --rc geninfo_all_blocks=1 00:10:25.792 --rc geninfo_unexecuted_blocks=1 00:10:25.792 00:10:25.792 ' 00:10:25.792 05:20:40 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.792 05:20:40 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.792 05:20:40 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.792 05:20:40 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.792 05:20:40 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.792 05:20:40 spdk_dd -- paths/export.sh@5 -- # export PATH 00:10:25.792 05:20:40 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.792 05:20:40 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.088 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.088 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.348 05:20:40 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:10:26.348 05:20:40 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@233 -- # local class 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@235 -- # local progif 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@236 -- # class=01 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:10:26.348 05:20:40 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:10:26.348 05:20:40 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:26.348 05:20:40 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@139 -- # local lib 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
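The nvme_in_userspace scan traced just above locates NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), which is where the "0108" and "-p02" filters come from. A minimal standalone sketch of that idea, assuming lspci is available (simplified from the scripts/common.sh helpers, not the exact code path):

    # Print the BDF of every NVMe controller (PCI class 01, subclass 08, prog-if 02).
    list_nvme_bdfs() {
        lspci -mm -n -D |                                         # machine-readable, numeric IDs, full domain:bus:dev.func
            grep -i -- -p02 |                                     # keep only prog-if 02 rows (NVM Express)
            awk -v cc="0108" -F ' ' '{ if (cc ~ $2) print $1 }' | # second field carries the class/subclass code
            tr -d '"'                                             # strip the quoting lspci -mm adds
    }
    list_nvme_bdfs    # on this VM the scan yields 0000:00:10.0 and 0000:00:11.0

Each candidate BDF is then passed through pci_can_use, which checks it against the (here empty) allow/block filters before it is accepted into the nvmes array.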
00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:10:26.348 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
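All of the [[ lib == liburing.so.* ]] probes in this stretch come from check_liburing in dd/common.sh, which decides whether the spdk_dd binary was linked against liburing by walking its ELF DT_NEEDED entries. Roughly, as a sketch using the same binary path as this run:

    # Scan the dynamic dependencies of spdk_dd for a liburing entry.
    liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'

In this build the loop eventually reaches liburing.so.2, so liburing_in_use ends up set to 1 and the guard at dd/dd.sh line 15 ((( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))) does not trigger.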
00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:10:26.349 * spdk_dd linked to liburing 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:26.349 05:20:40 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:26.349 05:20:40 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:26.350 05:20:40 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:26.350 05:20:40 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:10:26.350 05:20:40 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:10:26.350 05:20:40 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:10:26.350 05:20:40 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:10:26.350 05:20:40 spdk_dd -- dd/common.sh@153 -- # return 0 00:10:26.350 05:20:40 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:10:26.350 05:20:40 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:26.350 05:20:40 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:26.350 05:20:40 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.350 05:20:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:26.350 ************************************ 00:10:26.350 START TEST spdk_dd_basic_rw 00:10:26.350 ************************************ 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:26.350 * Looking for test storage... 00:10:26.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:26.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.350 --rc genhtml_branch_coverage=1 00:10:26.350 --rc genhtml_function_coverage=1 00:10:26.350 --rc genhtml_legend=1 00:10:26.350 --rc geninfo_all_blocks=1 00:10:26.350 --rc geninfo_unexecuted_blocks=1 00:10:26.350 00:10:26.350 ' 00:10:26.350 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:26.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.350 --rc genhtml_branch_coverage=1 00:10:26.350 --rc genhtml_function_coverage=1 00:10:26.350 --rc genhtml_legend=1 00:10:26.350 --rc geninfo_all_blocks=1 00:10:26.350 --rc geninfo_unexecuted_blocks=1 00:10:26.351 00:10:26.351 ' 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:26.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.351 --rc genhtml_branch_coverage=1 00:10:26.351 --rc genhtml_function_coverage=1 00:10:26.351 --rc genhtml_legend=1 00:10:26.351 --rc geninfo_all_blocks=1 00:10:26.351 --rc geninfo_unexecuted_blocks=1 00:10:26.351 00:10:26.351 ' 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:26.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.351 --rc genhtml_branch_coverage=1 00:10:26.351 --rc genhtml_function_coverage=1 00:10:26.351 --rc genhtml_legend=1 00:10:26.351 --rc geninfo_all_blocks=1 00:10:26.351 --rc geninfo_unexecuted_blocks=1 00:10:26.351 00:10:26.351 ' 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:10:26.351 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
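basic_rw.sh describes each controller to attach in a bash associative array named method_bdev_nvme_attach_controller_N; gen_conf (invoked below together with --json /dev/fd/6x) evidently assembles those arrays into the bdev subsystem config handed to spdk_dd. For this run the array is, as traced above:

    declare -A method_bdev_nvme_attach_controller_0=(
        ['name']='Nvme0'            # controller name; its namespace shows up as the Nvme0n1 bdev
        ['traddr']='0000:00:10.0'   # first of the two PCI addresses passed to basic_rw.sh
        ['trtype']='pcie'
    )

and it surfaces further down in the trace as the {"params": {"trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0"}, "method": "bdev_nvme_attach_controller"} entry of the JSON config, followed by a bdev_wait_for_examine step.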
00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:10:26.611 05:20:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:10:26.612 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:10:26.612 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.613 ************************************ 00:10:26.613 START TEST dd_bs_lt_native_bs 00:10:26.613 ************************************ 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:26.613 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:26.613 [2024-11-20 05:20:41.117615] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:26.613 [2024-11-20 05:20:41.117730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60031 ] 00:10:26.872 { 00:10:26.872 "subsystems": [ 00:10:26.872 { 00:10:26.872 "subsystem": "bdev", 00:10:26.872 "config": [ 00:10:26.872 { 00:10:26.872 "params": { 00:10:26.872 "trtype": "pcie", 00:10:26.872 "traddr": "0000:00:10.0", 00:10:26.872 "name": "Nvme0" 00:10:26.872 }, 00:10:26.872 "method": "bdev_nvme_attach_controller" 00:10:26.872 }, 00:10:26.872 { 00:10:26.872 "method": "bdev_wait_for_examine" 00:10:26.872 } 00:10:26.872 ] 00:10:26.872 } 00:10:26.872 ] 00:10:26.872 } 00:10:26.872 [2024-11-20 05:20:41.261261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.872 [2024-11-20 05:20:41.300511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.872 [2024-11-20 05:20:41.331796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.131 [2024-11-20 05:20:41.426238] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:10:27.131 [2024-11-20 05:20:41.426345] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:27.131 [2024-11-20 05:20:41.517039] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:27.131 00:10:27.131 real 0m0.522s 00:10:27.131 user 0m0.363s 00:10:27.131 sys 0m0.118s 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.131 05:20:41 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:10:27.131 ************************************ 00:10:27.131 END TEST dd_bs_lt_native_bs 00:10:27.131 ************************************ 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:27.131 ************************************ 00:10:27.131 START TEST dd_rw 00:10:27.131 ************************************ 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:27.131 05:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.066 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:10:28.066 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:28.066 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:28.066 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.066 [2024-11-20 05:20:42.395885] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
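The dd_rw setup above builds its block-size list by left-shifting the native block size that the earlier identify output reported (LBA Format #04, 4096 bytes) and pairs each size with queue depths 1 and 64. A minimal standalone sketch of that matrix construction, assuming the same 4096-byte native block size seen in this run (the echo line is only a placeholder for the spdk_dd invocations that follow in the log):

    # Sketch only: rebuild the (block size, queue depth) matrix that dd_rw iterates over.
    native_bs=4096          # from "LBA Format #04: Data Size: 4096" in the identify output above
    qds=(1 64)
    bss=()
    for i in 0 1 2; do
        bss+=($((native_bs << i)))   # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            echo "would run: spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=${bs} --qd=${qd}"
        done
    done

With native_bs=4096 this yields 4096, 8192 and 16384, which matches the --bs values exercised in the iterations below.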
00:10:28.066 [2024-11-20 05:20:42.396260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:10:28.066 { 00:10:28.066 "subsystems": [ 00:10:28.066 { 00:10:28.066 "subsystem": "bdev", 00:10:28.066 "config": [ 00:10:28.066 { 00:10:28.066 "params": { 00:10:28.066 "trtype": "pcie", 00:10:28.066 "traddr": "0000:00:10.0", 00:10:28.066 "name": "Nvme0" 00:10:28.066 }, 00:10:28.066 "method": "bdev_nvme_attach_controller" 00:10:28.066 }, 00:10:28.066 { 00:10:28.066 "method": "bdev_wait_for_examine" 00:10:28.066 } 00:10:28.066 ] 00:10:28.066 } 00:10:28.066 ] 00:10:28.066 } 00:10:28.066 [2024-11-20 05:20:42.540140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.328 [2024-11-20 05:20:42.590281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.328 [2024-11-20 05:20:42.628474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.328  [2024-11-20T05:20:43.099Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:28.586 00:10:28.586 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:10:28.587 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:28.587 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:28.587 05:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.587 [2024-11-20 05:20:42.901539] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
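Every spdk_dd call in this test receives its bdev configuration as JSON over an inherited file descriptor (--json /dev/fd/62), generated by gen_conf. A sketch of an equivalent standalone invocation that writes the same configuration shown in the log to a regular file instead; the /tmp path is illustrative, the JSON content and flags are taken from the output above:

    # Sketch only: same bdev config as gen_conf emits here, passed from a temporary file.
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            {
              "method": "bdev_wait_for_examine"
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0.json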
00:10:28.587 [2024-11-20 05:20:42.901851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:10:28.587 { 00:10:28.587 "subsystems": [ 00:10:28.587 { 00:10:28.587 "subsystem": "bdev", 00:10:28.587 "config": [ 00:10:28.587 { 00:10:28.587 "params": { 00:10:28.587 "trtype": "pcie", 00:10:28.587 "traddr": "0000:00:10.0", 00:10:28.587 "name": "Nvme0" 00:10:28.587 }, 00:10:28.587 "method": "bdev_nvme_attach_controller" 00:10:28.587 }, 00:10:28.587 { 00:10:28.587 "method": "bdev_wait_for_examine" 00:10:28.587 } 00:10:28.587 ] 00:10:28.587 } 00:10:28.587 ] 00:10:28.587 } 00:10:28.587 [2024-11-20 05:20:43.047963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.587 [2024-11-20 05:20:43.081169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.845 [2024-11-20 05:20:43.110571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.845  [2024-11-20T05:20:43.358Z] Copying: 60/60 [kB] (average 19 MBps) 00:10:28.845 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:28.845 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:28.846 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:28.846 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:28.846 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:28.846 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:28.846 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:29.104 { 00:10:29.104 "subsystems": [ 00:10:29.104 { 00:10:29.104 "subsystem": "bdev", 00:10:29.104 "config": [ 00:10:29.104 { 00:10:29.104 "params": { 00:10:29.104 "trtype": "pcie", 00:10:29.104 "traddr": "0000:00:10.0", 00:10:29.104 "name": "Nvme0" 00:10:29.104 }, 00:10:29.104 "method": "bdev_nvme_attach_controller" 00:10:29.104 }, 00:10:29.104 { 00:10:29.104 "method": "bdev_wait_for_examine" 00:10:29.104 } 00:10:29.104 ] 00:10:29.104 } 00:10:29.104 ] 00:10:29.104 } 00:10:29.104 [2024-11-20 05:20:43.415938] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
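Each (bs, qd) combination is verified as a full round trip: the generated dump file is written to the Nvme0n1 bdev, read back into a second file, and the two files are compared with diff. A condensed sketch of that sequence for the 4096-byte, qd=1 case, using the flags visible in the log and the illustrative config file from the previous sketch:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    CONF=/tmp/nvme0.json   # illustrative path, see earlier sketch

    # dd.dump0 holds the generated data (15 blocks of 4096 bytes, 61440 total).
    # Write it to the bdev, then read the same number of blocks back.
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"

    # diff -q exits non-zero if the read-back data differs from what was written.
    diff -q "$DUMP0" "$DUMP1"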
00:10:29.104 [2024-11-20 05:20:43.416317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:10:29.104 [2024-11-20 05:20:43.567977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.104 [2024-11-20 05:20:43.601634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.362 [2024-11-20 05:20:43.631351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.362  [2024-11-20T05:20:43.875Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:29.362 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:29.362 05:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:30.297 05:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:10:30.297 05:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:30.297 05:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:30.297 05:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:30.297 { 00:10:30.297 "subsystems": [ 00:10:30.297 { 00:10:30.297 "subsystem": "bdev", 00:10:30.297 "config": [ 00:10:30.297 { 00:10:30.297 "params": { 00:10:30.297 "trtype": "pcie", 00:10:30.297 "traddr": "0000:00:10.0", 00:10:30.297 "name": "Nvme0" 00:10:30.297 }, 00:10:30.297 "method": "bdev_nvme_attach_controller" 00:10:30.297 }, 00:10:30.297 { 00:10:30.297 "method": "bdev_wait_for_examine" 00:10:30.297 } 00:10:30.297 ] 00:10:30.297 } 00:10:30.297 ] 00:10:30.297 } 00:10:30.297 [2024-11-20 05:20:44.622457] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
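Between combinations, clear_nvme overwrites the region under test with zeroes so the next pass cannot succeed by reading stale data; the 1024/1024 kB copy above is that zero fill completing. Its core is the single 1 MiB write shown in the preceding invocation (sketch, same illustrative config file as before):

    # Sketch only: blank the bdev with one 1 MiB block of zeroes before the next (bs, qd) pass.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/nvme0.json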
00:10:30.297 [2024-11-20 05:20:44.622796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60110 ] 00:10:30.297 [2024-11-20 05:20:44.778341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.556 [2024-11-20 05:20:44.811493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.557 [2024-11-20 05:20:44.842524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.557  [2024-11-20T05:20:45.070Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:30.557 00:10:30.816 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:30.816 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:10:30.816 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:30.816 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:30.816 [2024-11-20 05:20:45.116993] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:30.816 [2024-11-20 05:20:45.117091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60128 ] 00:10:30.816 { 00:10:30.816 "subsystems": [ 00:10:30.816 { 00:10:30.816 "subsystem": "bdev", 00:10:30.816 "config": [ 00:10:30.816 { 00:10:30.816 "params": { 00:10:30.816 "trtype": "pcie", 00:10:30.816 "traddr": "0000:00:10.0", 00:10:30.816 "name": "Nvme0" 00:10:30.816 }, 00:10:30.816 "method": "bdev_nvme_attach_controller" 00:10:30.816 }, 00:10:30.816 { 00:10:30.816 "method": "bdev_wait_for_examine" 00:10:30.816 } 00:10:30.816 ] 00:10:30.816 } 00:10:30.816 ] 00:10:30.816 } 00:10:30.816 [2024-11-20 05:20:45.260937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.816 [2024-11-20 05:20:45.294073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.816 [2024-11-20 05:20:45.324430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.075  [2024-11-20T05:20:45.588Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:31.075 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:31.075 05:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.366 [2024-11-20 05:20:45.613234] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:31.366 [2024-11-20 05:20:45.613345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 00:10:31.366 { 00:10:31.366 "subsystems": [ 00:10:31.366 { 00:10:31.366 "subsystem": "bdev", 00:10:31.366 "config": [ 00:10:31.366 { 00:10:31.366 "params": { 00:10:31.366 "trtype": "pcie", 00:10:31.366 "traddr": "0000:00:10.0", 00:10:31.366 "name": "Nvme0" 00:10:31.366 }, 00:10:31.366 "method": "bdev_nvme_attach_controller" 00:10:31.366 }, 00:10:31.366 { 00:10:31.366 "method": "bdev_wait_for_examine" 00:10:31.366 } 00:10:31.366 ] 00:10:31.366 } 00:10:31.366 ] 00:10:31.366 } 00:10:31.366 [2024-11-20 05:20:45.764165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.366 [2024-11-20 05:20:45.808408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.366 [2024-11-20 05:20:45.845334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.624  [2024-11-20T05:20:46.137Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:31.624 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:31.624 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:32.560 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:10:32.560 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:32.560 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:32.560 05:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:32.560 [2024-11-20 05:20:46.773216] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
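The per-pass transfer size is simply the block size times the block count, which is why the gen_bytes size changes as the block size grows. A quick check of the three sizes recorded in this run:

    # 15 blocks of 4096 B, 7 blocks of 8192 B and 3 blocks of 16384 B,
    # matching the size= values in this log.
    echo $((15 * 4096))    # 61440
    echo $((7  * 8192))    # 57344
    echo $((3  * 16384))   # 49152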
00:10:32.560 [2024-11-20 05:20:46.773504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:10:32.560 { 00:10:32.560 "subsystems": [ 00:10:32.560 { 00:10:32.560 "subsystem": "bdev", 00:10:32.560 "config": [ 00:10:32.560 { 00:10:32.560 "params": { 00:10:32.560 "trtype": "pcie", 00:10:32.560 "traddr": "0000:00:10.0", 00:10:32.560 "name": "Nvme0" 00:10:32.560 }, 00:10:32.560 "method": "bdev_nvme_attach_controller" 00:10:32.560 }, 00:10:32.560 { 00:10:32.560 "method": "bdev_wait_for_examine" 00:10:32.560 } 00:10:32.560 ] 00:10:32.560 } 00:10:32.560 ] 00:10:32.560 } 00:10:32.560 [2024-11-20 05:20:46.924036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.560 [2024-11-20 05:20:46.964704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.560 [2024-11-20 05:20:47.000138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.819  [2024-11-20T05:20:47.332Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:32.819 00:10:32.819 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:10:32.819 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:32.819 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:32.819 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:33.079 [2024-11-20 05:20:47.339211] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:33.079 [2024-11-20 05:20:47.339315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60177 ] 00:10:33.079 { 00:10:33.079 "subsystems": [ 00:10:33.079 { 00:10:33.079 "subsystem": "bdev", 00:10:33.079 "config": [ 00:10:33.079 { 00:10:33.079 "params": { 00:10:33.079 "trtype": "pcie", 00:10:33.079 "traddr": "0000:00:10.0", 00:10:33.079 "name": "Nvme0" 00:10:33.079 }, 00:10:33.079 "method": "bdev_nvme_attach_controller" 00:10:33.079 }, 00:10:33.079 { 00:10:33.079 "method": "bdev_wait_for_examine" 00:10:33.079 } 00:10:33.079 ] 00:10:33.079 } 00:10:33.079 ] 00:10:33.079 } 00:10:33.079 [2024-11-20 05:20:47.492028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.079 [2024-11-20 05:20:47.524931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.079 [2024-11-20 05:20:47.555317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.337  [2024-11-20T05:20:47.850Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:33.337 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:33.337 05:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:33.596 [2024-11-20 05:20:47.849827] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:33.596 [2024-11-20 05:20:47.849949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:10:33.596 { 00:10:33.596 "subsystems": [ 00:10:33.596 { 00:10:33.596 "subsystem": "bdev", 00:10:33.596 "config": [ 00:10:33.596 { 00:10:33.596 "params": { 00:10:33.596 "trtype": "pcie", 00:10:33.596 "traddr": "0000:00:10.0", 00:10:33.596 "name": "Nvme0" 00:10:33.596 }, 00:10:33.596 "method": "bdev_nvme_attach_controller" 00:10:33.596 }, 00:10:33.596 { 00:10:33.596 "method": "bdev_wait_for_examine" 00:10:33.596 } 00:10:33.596 ] 00:10:33.596 } 00:10:33.596 ] 00:10:33.596 } 00:10:33.596 [2024-11-20 05:20:47.999429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.596 [2024-11-20 05:20:48.034140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.596 [2024-11-20 05:20:48.064963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.854  [2024-11-20T05:20:48.367Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:33.854 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:33.854 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.421 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:10:34.421 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:34.421 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:34.421 05:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.680 { 00:10:34.680 "subsystems": [ 00:10:34.680 { 00:10:34.680 "subsystem": "bdev", 00:10:34.680 "config": [ 00:10:34.680 { 00:10:34.680 "params": { 00:10:34.680 "trtype": "pcie", 00:10:34.681 "traddr": "0000:00:10.0", 00:10:34.681 "name": "Nvme0" 00:10:34.681 }, 00:10:34.681 "method": "bdev_nvme_attach_controller" 00:10:34.681 }, 00:10:34.681 { 00:10:34.681 "method": "bdev_wait_for_examine" 00:10:34.681 } 00:10:34.681 ] 00:10:34.681 } 00:10:34.681 ] 00:10:34.681 } 00:10:34.681 [2024-11-20 05:20:48.980377] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:34.681 [2024-11-20 05:20:48.980621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60206 ] 00:10:34.681 [2024-11-20 05:20:49.128521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.681 [2024-11-20 05:20:49.162563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.939 [2024-11-20 05:20:49.194450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.939  [2024-11-20T05:20:49.452Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:34.939 00:10:34.939 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:10:34.939 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:34.939 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:34.939 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.198 { 00:10:35.198 "subsystems": [ 00:10:35.198 { 00:10:35.198 "subsystem": "bdev", 00:10:35.198 "config": [ 00:10:35.198 { 00:10:35.198 "params": { 00:10:35.198 "trtype": "pcie", 00:10:35.198 "traddr": "0000:00:10.0", 00:10:35.198 "name": "Nvme0" 00:10:35.198 }, 00:10:35.198 "method": "bdev_nvme_attach_controller" 00:10:35.198 }, 00:10:35.198 { 00:10:35.198 "method": "bdev_wait_for_examine" 00:10:35.198 } 00:10:35.198 ] 00:10:35.198 } 00:10:35.198 ] 00:10:35.198 } 00:10:35.198 [2024-11-20 05:20:49.500767] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:35.198 [2024-11-20 05:20:49.500940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60225 ] 00:10:35.198 [2024-11-20 05:20:49.657920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.198 [2024-11-20 05:20:49.692316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.456 [2024-11-20 05:20:49.723353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.456  [2024-11-20T05:20:49.969Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:35.456 00:10:35.456 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:35.457 05:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.717 [2024-11-20 05:20:50.011352] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:35.717 [2024-11-20 05:20:50.011716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60235 ] 00:10:35.717 { 00:10:35.717 "subsystems": [ 00:10:35.717 { 00:10:35.717 "subsystem": "bdev", 00:10:35.717 "config": [ 00:10:35.717 { 00:10:35.717 "params": { 00:10:35.717 "trtype": "pcie", 00:10:35.717 "traddr": "0000:00:10.0", 00:10:35.717 "name": "Nvme0" 00:10:35.717 }, 00:10:35.717 "method": "bdev_nvme_attach_controller" 00:10:35.717 }, 00:10:35.717 { 00:10:35.717 "method": "bdev_wait_for_examine" 00:10:35.717 } 00:10:35.717 ] 00:10:35.717 } 00:10:35.717 ] 00:10:35.717 } 00:10:35.717 [2024-11-20 05:20:50.161193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.717 [2024-11-20 05:20:50.198778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.978 [2024-11-20 05:20:50.229587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.978  [2024-11-20T05:20:50.491Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:35.978 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:35.978 05:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.545 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:10:36.545 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:36.545 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:36.545 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.804 [2024-11-20 05:20:51.064225] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:36.804 [2024-11-20 05:20:51.064579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:10:36.804 { 00:10:36.804 "subsystems": [ 00:10:36.804 { 00:10:36.804 "subsystem": "bdev", 00:10:36.804 "config": [ 00:10:36.804 { 00:10:36.804 "params": { 00:10:36.804 "trtype": "pcie", 00:10:36.804 "traddr": "0000:00:10.0", 00:10:36.804 "name": "Nvme0" 00:10:36.804 }, 00:10:36.804 "method": "bdev_nvme_attach_controller" 00:10:36.804 }, 00:10:36.804 { 00:10:36.804 "method": "bdev_wait_for_examine" 00:10:36.804 } 00:10:36.804 ] 00:10:36.804 } 00:10:36.804 ] 00:10:36.804 } 00:10:36.804 [2024-11-20 05:20:51.217799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.804 [2024-11-20 05:20:51.251469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.804 [2024-11-20 05:20:51.280993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.063  [2024-11-20T05:20:51.576Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:37.063 00:10:37.063 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:10:37.063 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:37.063 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.064 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.064 [2024-11-20 05:20:51.548790] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:37.064 [2024-11-20 05:20:51.549057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60273 ] 00:10:37.064 { 00:10:37.064 "subsystems": [ 00:10:37.064 { 00:10:37.064 "subsystem": "bdev", 00:10:37.064 "config": [ 00:10:37.064 { 00:10:37.064 "params": { 00:10:37.064 "trtype": "pcie", 00:10:37.064 "traddr": "0000:00:10.0", 00:10:37.064 "name": "Nvme0" 00:10:37.064 }, 00:10:37.064 "method": "bdev_nvme_attach_controller" 00:10:37.064 }, 00:10:37.064 { 00:10:37.064 "method": "bdev_wait_for_examine" 00:10:37.064 } 00:10:37.064 ] 00:10:37.064 } 00:10:37.064 ] 00:10:37.064 } 00:10:37.323 [2024-11-20 05:20:51.694715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.323 [2024-11-20 05:20:51.729146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.323 [2024-11-20 05:20:51.760002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.581  [2024-11-20T05:20:52.095Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:37.582 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.582 05:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.582 [2024-11-20 05:20:52.044899] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:37.582 [2024-11-20 05:20:52.045237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60283 ] 00:10:37.582 { 00:10:37.582 "subsystems": [ 00:10:37.582 { 00:10:37.582 "subsystem": "bdev", 00:10:37.582 "config": [ 00:10:37.582 { 00:10:37.582 "params": { 00:10:37.582 "trtype": "pcie", 00:10:37.582 "traddr": "0000:00:10.0", 00:10:37.582 "name": "Nvme0" 00:10:37.582 }, 00:10:37.582 "method": "bdev_nvme_attach_controller" 00:10:37.582 }, 00:10:37.582 { 00:10:37.582 "method": "bdev_wait_for_examine" 00:10:37.582 } 00:10:37.582 ] 00:10:37.582 } 00:10:37.582 ] 00:10:37.582 } 00:10:37.840 [2024-11-20 05:20:52.198570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.840 [2024-11-20 05:20:52.236203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.840 [2024-11-20 05:20:52.267112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.099  [2024-11-20T05:20:52.612Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:38.099 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:38.099 05:20:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.667 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:10:38.667 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:38.667 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:38.667 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.667 [2024-11-20 05:20:53.085048] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:38.667 [2024-11-20 05:20:53.085170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:10:38.667 { 00:10:38.667 "subsystems": [ 00:10:38.667 { 00:10:38.667 "subsystem": "bdev", 00:10:38.667 "config": [ 00:10:38.667 { 00:10:38.667 "params": { 00:10:38.667 "trtype": "pcie", 00:10:38.667 "traddr": "0000:00:10.0", 00:10:38.667 "name": "Nvme0" 00:10:38.667 }, 00:10:38.667 "method": "bdev_nvme_attach_controller" 00:10:38.667 }, 00:10:38.667 { 00:10:38.667 "method": "bdev_wait_for_examine" 00:10:38.667 } 00:10:38.667 ] 00:10:38.667 } 00:10:38.667 ] 00:10:38.667 } 00:10:38.927 [2024-11-20 05:20:53.235829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.927 [2024-11-20 05:20:53.270677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.927 [2024-11-20 05:20:53.302070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.927  [2024-11-20T05:20:53.700Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:39.187 00:10:39.187 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:10:39.187 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:39.187 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.187 05:20:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.187 [2024-11-20 05:20:53.609446] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:39.187 [2024-11-20 05:20:53.609582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60321 ] 00:10:39.187 { 00:10:39.187 "subsystems": [ 00:10:39.187 { 00:10:39.187 "subsystem": "bdev", 00:10:39.187 "config": [ 00:10:39.187 { 00:10:39.187 "params": { 00:10:39.187 "trtype": "pcie", 00:10:39.187 "traddr": "0000:00:10.0", 00:10:39.187 "name": "Nvme0" 00:10:39.187 }, 00:10:39.187 "method": "bdev_nvme_attach_controller" 00:10:39.187 }, 00:10:39.187 { 00:10:39.187 "method": "bdev_wait_for_examine" 00:10:39.187 } 00:10:39.187 ] 00:10:39.187 } 00:10:39.187 ] 00:10:39.187 } 00:10:39.445 [2024-11-20 05:20:53.761341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.445 [2024-11-20 05:20:53.795493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.445 [2024-11-20 05:20:53.827139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.445  [2024-11-20T05:20:54.216Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:39.703 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.703 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.703 { 00:10:39.703 "subsystems": [ 00:10:39.703 { 00:10:39.703 "subsystem": "bdev", 00:10:39.703 "config": [ 00:10:39.703 { 00:10:39.703 "params": { 00:10:39.703 "trtype": "pcie", 00:10:39.703 "traddr": "0000:00:10.0", 00:10:39.703 "name": "Nvme0" 00:10:39.703 }, 00:10:39.703 "method": "bdev_nvme_attach_controller" 00:10:39.703 }, 00:10:39.703 { 00:10:39.703 "method": "bdev_wait_for_examine" 00:10:39.703 } 00:10:39.703 ] 00:10:39.703 } 00:10:39.703 ] 00:10:39.703 } 00:10:39.703 [2024-11-20 05:20:54.118188] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:39.703 [2024-11-20 05:20:54.118291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60331 ] 00:10:39.962 [2024-11-20 05:20:54.270298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.962 [2024-11-20 05:20:54.311665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.962 [2024-11-20 05:20:54.347665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.962  [2024-11-20T05:20:54.734Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:40.221 00:10:40.221 00:10:40.221 real 0m12.966s 00:10:40.221 user 0m9.880s 00:10:40.221 sys 0m3.803s 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.221 ************************************ 00:10:40.221 END TEST dd_rw 00:10:40.221 ************************************ 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.221 ************************************ 00:10:40.221 START TEST dd_rw_offset 00:10:40.221 ************************************ 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:40.221 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:40.222 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=wcbty8l1tv9z2le4rpbxu9yuec2y8owhw5l4v0thhiyx5fpoj0b7g1w4xkho7wouibs3z88rtizf7g0229ko83zzgvll1188pcnfwy4et0km6go4punc96k31a1pfaai1g2lt62oxe1usw8nrmy0z0g1m9tqohg10hn78h9kx4dokoxy34ujplgijczv43ukh3hdakjp4uiysrtiow3szc6auv15zrs5hmxe26auar18p5cvnglv0sr4vl82ux88kkyjrujgizh3za9bw14c0xdq3weh6acb6lgegxfhwqtak22q76byar85qi8zo0h3bgufjsnl0cy1i96eolxzhwczw4hnozjebbg2s5kbq1nsp5spepjr21ncqodhvyn8kr98j8ddrtdar4fen2tdy1rzxiv9gfcyqdy3bf0fzougkv11fgq3ndp0fut7mgl3l6xaulgoer8vszvbc9oo0djvec7cxyh2p2voytowfzsgfpugehpen7gqn0k5b7ffrwhv6jniz5759v9zoryy80v3gdm2jkryw70rt3e17zczy3cgh0mvic5qi8tagtpzm2dvxjay2tt9gp42vnnm8f3yz0uicwsmp4drxn297dw47ucbbccf9966b0usmikomxyvxmsb0fjzktuxv9muvtxe0ab0sy7entjok8hm2b5phtd3bdimzjq42u621twsq71kq6s7d2i6lsua5qon4s8rt2eh775i6wugja6twnozu7iasvl43ce6vtsdmq1lcue6j57sr68d03mhwhn44vapsi755iz7j0zaqtbr33x3f0siyfvvn07jcax0mb96zuvugav1kzpduojtfinhu0mqr16c3gijhau42kyjsihdkcee4x7ag6euswg0ryy5i4ayjka245fn7z6i7wxxthq0p7gtv6f9j4s3ygf4lw9jzs18i1j1phudwyw3gwcoraanfxz8vljlsruwe4em2um2az2bkokxajf2zl5lpb8bagwpxav6y9fmxk8jq2nfmhl1g7oubfi0zx282mjr95lt20f8e7dqasvvglpgp8u3x5c540n5gyiebp1iz40zc15wkwjevmg38w3yy4928pm8ncitnvnn5ahcr4g1eg8fmu06eiedtoje5xgszq7y90bot5htfy4mrlkv7ny277xc4zgma2djn7tslc7imx23gtb2tgqktyonia34102yinb5nmmvn3tw86ufkzgz9vhzg2mrbz8q0hcj2pldw2dytmvu1p0gtxrqcf2mp70ieryh3p338cmoknjds6rz07v639z21nnjx04wt6sz9acgmftpwjhk1882ua3oewr4lkpt1q04dzv950qw9o0746ogg7tfobxhgffwewc8gblk7c83qzgrd8w4w9xa7ccip92xrrzsow9u55hfnj1oh00byt6izym6q94v5bg3rlz4ev5zatfvywgxbkwen82kkwp4y94qpf502pt0znp01f723vqytgjuuv7f6dqa5ucii2lal0nvesblfcjlhivc3wruei1pn2fxn75mhho9e1mxrpklsui78ez6eu0e0fi44g0hx1y58yzyhaa8udquz8wlcz7qvg765cfjmum32ezzbhkf0s98uyvhh82if2a76aaab7bfngtisbm9seust4rtvoinhwm47dv7v7cjzexam9wo55owfizvzh5kgtg6b5ec3linoxt5mvc3177ht2viv3hyhgpns2bha320eqiay5dvne6vnmnrdsa8ujvlkq9cuytxx1z06gus8i8ku63ct8re452ld0gib0vryiued071u6h28rz2u4277cnuu592d6oiaweh0f7pr40qlofqrx6xu697bx46pkegxs7dmtmd54bozaxyoo1ut90tor527zl1kcdclzh1mcezb1eqmi8kekfvka6gv2gv2pt7mzae7ptp68g9t86q4kf7e140swuzof0k7l4salfmwnblwizmgkp0ib3eico5299tydj4tijg2c5zf7t9jljhrf7yt2z78ask2hml9yxae1ndo1zq0thnb0hyxz03h1wkjd0xl3dcccuyu6ty3v627g59oqf8onyx3ts4czxvgn42honrv4tzm250d8xtisynp1rz6v9x19vi1f5imbucg1uxbt5py3gz7dt916b9g9t2w02w28brqranexmvcsfjbhcdnufag8pm1s74qf7xhxwmm7antot3p08akh7mu17je9kfrgi6fgwhz4j0oh9oojokjksh7hc8ci182z7t3unieln1kgw17c8pqsauqh3t7o9lz79w49gecar4s0koa4lpcwydc7ct7k847ty4edi68d29hz437upvr9dm4hikxbixd2xkap74gcoa132o379x2zja0s1tnkeakptr1mi0kbe8inf3jf000rh2lc1mfect49sdw97rbx28t22750d9nn4q13wxeh58licaadkyx131jy6aupgvk08zlim27k7d49c3spozgyahtwdethn002al2p43ppz66yy7700orpxah5vgvjwqkqcnm5cenydscanslwio07t19pig1mzsi4xwzei52z9hiwd7zlp6x8nb9kr8v6zzw7ujiym8u6lp9sxy35xkqas8zxe58htbpx02dyjqai6h1e4035q7e62plgx6zjak21t1j0p224ukkm686jfthz3tk7wy2ymg4i15zr35cbtz815kyj4cswig9ww9vfo9vfyvixs4txrwt63d0l8sc7o01mib67b4r7d6s34zlkdg2gmf13wmyao6vign72mock33q51gvy4odtrvhf17v31akwhntex6l0a5hp1x8xca5vl0l0m9nzdx00wsbxiyin82d8ljgckt531jbsml3mg3ddcjfvpfv497hlc8uw1gn7ho6ta8xksng598t5krili4ux4tqemltyt1kbg7pol0w1l7y221o1hdsah16jpe6jxrewhrdxaecmr9aoxzur1cg3chb7mae1ii2to074k86a9a9k5sygsap3hbkeqh85aawcsflcxwij62xivbfh9c8zbsyi2oz9zlhv1o9g3k4if1wiaqeum37yli48wi8zhm9vfo858pi8o15a8fd2e2scts1rpjsik9gjolb9agwcf6mbhgavif92nv3rhepd1zt9qv1vll1bop5ymzdmp375w0o08uudwkfkwvg72m7omm5goo4ghvcvjyy93bpnxfkgb3g0a12hm8abqmfr5hignrb4peg4zunpfn6wxnl6ocbbot76v1zbaazdqdl3i4mkbl07tlutu86m7khee7pfrndonoj6p1yo92y1mlk1y6yr2e2h8jvqm8qafwabdt6jwxs4vbref3z4dnpnezwnmnq8lsz9eet8obh84p16tbh3eml3qme66lslsoq89xrgzv0drg4vtwwi20j0f1jzsbo9w64tqmzqvnm25wlt5w750mdmu0g2nny64n4u6kqehff4016ldjnbv28wnz2oh1w8krppceoy3wt8dj3unvrhnpj7
xo8rqat1wxnk3v6zv4a5vcnbcpfi4oi4koeztjj59uv3wjttdm4k0whz6o19291u0mbxv1yvvxme8awl1kvp0ullvq7g2twojbvk6eih94kklrsiqwil9iut8aiaatkoxevya35yxgmhr3uyd7uo7urdb9exkkspjvzh346h5np3oajegq03vb144icqd4lszhsnig6egcmmdzy6cem5r8mva0xskk6bqlro6aihmy8bgtc6heeq1ly7qixxpto3mwww5g5bhv7m1prugpvja1xb5e934ty1b5cfs6qghhh976ejlgoadurviztfsfu4ycilj7t12e394kgtkyp2aa3d88hnqdd3tlp508ptbcttebzplchmmpil7pn8r2dvb26jq2zmucd934grhh4fkhrmr1bnxo020z1fppyr2wty01rs94qok0k6idpjwjpgnb89cn88qekgzuf6yyb30vnwdm5e051duwpsbx3khcujint63az6icqynnfhutjkuzf8fpjp04iu3hl6m0w7evu4cfglap2seo 00:10:40.222 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:40.222 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:40.222 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:40.222 05:20:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:40.481 { 00:10:40.481 "subsystems": [ 00:10:40.481 { 00:10:40.481 "subsystem": "bdev", 00:10:40.481 "config": [ 00:10:40.481 { 00:10:40.481 "params": { 00:10:40.481 "trtype": "pcie", 00:10:40.481 "traddr": "0000:00:10.0", 00:10:40.481 "name": "Nvme0" 00:10:40.481 }, 00:10:40.481 "method": "bdev_nvme_attach_controller" 00:10:40.481 }, 00:10:40.481 { 00:10:40.481 "method": "bdev_wait_for_examine" 00:10:40.481 } 00:10:40.481 ] 00:10:40.481 } 00:10:40.481 ] 00:10:40.481 } 00:10:40.481 [2024-11-20 05:20:54.773873] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:40.481 [2024-11-20 05:20:54.774122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60367 ] 00:10:40.481 [2024-11-20 05:20:54.918413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.481 [2024-11-20 05:20:54.953486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.481 [2024-11-20 05:20:54.984400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.739  [2024-11-20T05:20:55.252Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:40.739 00:10:40.739 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:40.739 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:40.739 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:40.740 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:40.998 { 00:10:40.998 "subsystems": [ 00:10:40.998 { 00:10:40.998 "subsystem": "bdev", 00:10:40.998 "config": [ 00:10:40.998 { 00:10:40.998 "params": { 00:10:40.998 "trtype": "pcie", 00:10:40.998 "traddr": "0000:00:10.0", 00:10:40.998 "name": "Nvme0" 00:10:40.998 }, 00:10:40.998 "method": "bdev_nvme_attach_controller" 00:10:40.998 }, 00:10:40.998 { 00:10:40.998 "method": "bdev_wait_for_examine" 00:10:40.998 } 00:10:40.998 ] 00:10:40.998 } 00:10:40.998 ] 00:10:40.998 } 00:10:40.998 [2024-11-20 05:20:55.268643] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
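dd_rw_offset extends the round trip with offsets: the 4096 bytes of generated data above are written one block past the start of the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and the read-back must match byte for byte (the test does the comparison through a 4096-byte shell read, shown below as the escaped [[ ... == ... ]] match). A condensed sketch under the same assumptions as the earlier sketches, with cmp standing in for the shell-variable comparison:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    CONF=/tmp/nvme0.json   # illustrative path, see earlier sketch

    # Write one block of generated data at offset 1, then read that block back.
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"

    # Compare the first 4096 bytes of the read-back against the original data.
    cmp -n 4096 "$DUMP0" "$DUMP1"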
00:10:40.998 [2024-11-20 05:20:55.268890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:10:40.998 [2024-11-20 05:20:55.417181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.998 [2024-11-20 05:20:55.452322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.998 [2024-11-20 05:20:55.483701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.258  [2024-11-20T05:20:55.771Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:41.258 00:10:41.258 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ wcbty8l1tv9z2le4rpbxu9yuec2y8owhw5l4v0thhiyx5fpoj0b7g1w4xkho7wouibs3z88rtizf7g0229ko83zzgvll1188pcnfwy4et0km6go4punc96k31a1pfaai1g2lt62oxe1usw8nrmy0z0g1m9tqohg10hn78h9kx4dokoxy34ujplgijczv43ukh3hdakjp4uiysrtiow3szc6auv15zrs5hmxe26auar18p5cvnglv0sr4vl82ux88kkyjrujgizh3za9bw14c0xdq3weh6acb6lgegxfhwqtak22q76byar85qi8zo0h3bgufjsnl0cy1i96eolxzhwczw4hnozjebbg2s5kbq1nsp5spepjr21ncqodhvyn8kr98j8ddrtdar4fen2tdy1rzxiv9gfcyqdy3bf0fzougkv11fgq3ndp0fut7mgl3l6xaulgoer8vszvbc9oo0djvec7cxyh2p2voytowfzsgfpugehpen7gqn0k5b7ffrwhv6jniz5759v9zoryy80v3gdm2jkryw70rt3e17zczy3cgh0mvic5qi8tagtpzm2dvxjay2tt9gp42vnnm8f3yz0uicwsmp4drxn297dw47ucbbccf9966b0usmikomxyvxmsb0fjzktuxv9muvtxe0ab0sy7entjok8hm2b5phtd3bdimzjq42u621twsq71kq6s7d2i6lsua5qon4s8rt2eh775i6wugja6twnozu7iasvl43ce6vtsdmq1lcue6j57sr68d03mhwhn44vapsi755iz7j0zaqtbr33x3f0siyfvvn07jcax0mb96zuvugav1kzpduojtfinhu0mqr16c3gijhau42kyjsihdkcee4x7ag6euswg0ryy5i4ayjka245fn7z6i7wxxthq0p7gtv6f9j4s3ygf4lw9jzs18i1j1phudwyw3gwcoraanfxz8vljlsruwe4em2um2az2bkokxajf2zl5lpb8bagwpxav6y9fmxk8jq2nfmhl1g7oubfi0zx282mjr95lt20f8e7dqasvvglpgp8u3x5c540n5gyiebp1iz40zc15wkwjevmg38w3yy4928pm8ncitnvnn5ahcr4g1eg8fmu06eiedtoje5xgszq7y90bot5htfy4mrlkv7ny277xc4zgma2djn7tslc7imx23gtb2tgqktyonia34102yinb5nmmvn3tw86ufkzgz9vhzg2mrbz8q0hcj2pldw2dytmvu1p0gtxrqcf2mp70ieryh3p338cmoknjds6rz07v639z21nnjx04wt6sz9acgmftpwjhk1882ua3oewr4lkpt1q04dzv950qw9o0746ogg7tfobxhgffwewc8gblk7c83qzgrd8w4w9xa7ccip92xrrzsow9u55hfnj1oh00byt6izym6q94v5bg3rlz4ev5zatfvywgxbkwen82kkwp4y94qpf502pt0znp01f723vqytgjuuv7f6dqa5ucii2lal0nvesblfcjlhivc3wruei1pn2fxn75mhho9e1mxrpklsui78ez6eu0e0fi44g0hx1y58yzyhaa8udquz8wlcz7qvg765cfjmum32ezzbhkf0s98uyvhh82if2a76aaab7bfngtisbm9seust4rtvoinhwm47dv7v7cjzexam9wo55owfizvzh5kgtg6b5ec3linoxt5mvc3177ht2viv3hyhgpns2bha320eqiay5dvne6vnmnrdsa8ujvlkq9cuytxx1z06gus8i8ku63ct8re452ld0gib0vryiued071u6h28rz2u4277cnuu592d6oiaweh0f7pr40qlofqrx6xu697bx46pkegxs7dmtmd54bozaxyoo1ut90tor527zl1kcdclzh1mcezb1eqmi8kekfvka6gv2gv2pt7mzae7ptp68g9t86q4kf7e140swuzof0k7l4salfmwnblwizmgkp0ib3eico5299tydj4tijg2c5zf7t9jljhrf7yt2z78ask2hml9yxae1ndo1zq0thnb0hyxz03h1wkjd0xl3dcccuyu6ty3v627g59oqf8onyx3ts4czxvgn42honrv4tzm250d8xtisynp1rz6v9x19vi1f5imbucg1uxbt5py3gz7dt916b9g9t2w02w28brqranexmvcsfjbhcdnufag8pm1s74qf7xhxwmm7antot3p08akh7mu17je9kfrgi6fgwhz4j0oh9oojokjksh7hc8ci182z7t3unieln1kgw17c8pqsauqh3t7o9lz79w49gecar4s0koa4lpcwydc7ct7k847ty4edi68d29hz437upvr9dm4hikxbixd2xkap74gcoa132o379x2zja0s1tnkeakptr1mi0kbe8inf3jf000rh2lc1mfect49sdw97rbx28t22750d9nn4q13wxeh58licaadkyx131jy6aupgvk08zlim27k7d49c3spozgyahtwdethn002al2p43ppz66yy7700orpxah5vgvjwqkqcnm5cenydscanslwio07t19pig1mzsi4xwzei52z9hiwd7zlp6
x8nb9kr8v6zzw7ujiym8u6lp9sxy35xkqas8zxe58htbpx02dyjqai6h1e4035q7e62plgx6zjak21t1j0p224ukkm686jfthz3tk7wy2ymg4i15zr35cbtz815kyj4cswig9ww9vfo9vfyvixs4txrwt63d0l8sc7o01mib67b4r7d6s34zlkdg2gmf13wmyao6vign72mock33q51gvy4odtrvhf17v31akwhntex6l0a5hp1x8xca5vl0l0m9nzdx00wsbxiyin82d8ljgckt531jbsml3mg3ddcjfvpfv497hlc8uw1gn7ho6ta8xksng598t5krili4ux4tqemltyt1kbg7pol0w1l7y221o1hdsah16jpe6jxrewhrdxaecmr9aoxzur1cg3chb7mae1ii2to074k86a9a9k5sygsap3hbkeqh85aawcsflcxwij62xivbfh9c8zbsyi2oz9zlhv1o9g3k4if1wiaqeum37yli48wi8zhm9vfo858pi8o15a8fd2e2scts1rpjsik9gjolb9agwcf6mbhgavif92nv3rhepd1zt9qv1vll1bop5ymzdmp375w0o08uudwkfkwvg72m7omm5goo4ghvcvjyy93bpnxfkgb3g0a12hm8abqmfr5hignrb4peg4zunpfn6wxnl6ocbbot76v1zbaazdqdl3i4mkbl07tlutu86m7khee7pfrndonoj6p1yo92y1mlk1y6yr2e2h8jvqm8qafwabdt6jwxs4vbref3z4dnpnezwnmnq8lsz9eet8obh84p16tbh3eml3qme66lslsoq89xrgzv0drg4vtwwi20j0f1jzsbo9w64tqmzqvnm25wlt5w750mdmu0g2nny64n4u6kqehff4016ldjnbv28wnz2oh1w8krppceoy3wt8dj3unvrhnpj7xo8rqat1wxnk3v6zv4a5vcnbcpfi4oi4koeztjj59uv3wjttdm4k0whz6o19291u0mbxv1yvvxme8awl1kvp0ullvq7g2twojbvk6eih94kklrsiqwil9iut8aiaatkoxevya35yxgmhr3uyd7uo7urdb9exkkspjvzh346h5np3oajegq03vb144icqd4lszhsnig6egcmmdzy6cem5r8mva0xskk6bqlro6aihmy8bgtc6heeq1ly7qixxpto3mwww5g5bhv7m1prugpvja1xb5e934ty1b5cfs6qghhh976ejlgoadurviztfsfu4ycilj7t12e394kgtkyp2aa3d88hnqdd3tlp508ptbcttebzplchmmpil7pn8r2dvb26jq2zmucd934grhh4fkhrmr1bnxo020z1fppyr2wty01rs94qok0k6idpjwjpgnb89cn88qekgzuf6yyb30vnwdm5e051duwpsbx3khcujint63az6icqynnfhutjkuzf8fpjp04iu3hl6m0w7evu4cfglap2seo == \w\c\b\t\y\8\l\1\t\v\9\z\2\l\e\4\r\p\b\x\u\9\y\u\e\c\2\y\8\o\w\h\w\5\l\4\v\0\t\h\h\i\y\x\5\f\p\o\j\0\b\7\g\1\w\4\x\k\h\o\7\w\o\u\i\b\s\3\z\8\8\r\t\i\z\f\7\g\0\2\2\9\k\o\8\3\z\z\g\v\l\l\1\1\8\8\p\c\n\f\w\y\4\e\t\0\k\m\6\g\o\4\p\u\n\c\9\6\k\3\1\a\1\p\f\a\a\i\1\g\2\l\t\6\2\o\x\e\1\u\s\w\8\n\r\m\y\0\z\0\g\1\m\9\t\q\o\h\g\1\0\h\n\7\8\h\9\k\x\4\d\o\k\o\x\y\3\4\u\j\p\l\g\i\j\c\z\v\4\3\u\k\h\3\h\d\a\k\j\p\4\u\i\y\s\r\t\i\o\w\3\s\z\c\6\a\u\v\1\5\z\r\s\5\h\m\x\e\2\6\a\u\a\r\1\8\p\5\c\v\n\g\l\v\0\s\r\4\v\l\8\2\u\x\8\8\k\k\y\j\r\u\j\g\i\z\h\3\z\a\9\b\w\1\4\c\0\x\d\q\3\w\e\h\6\a\c\b\6\l\g\e\g\x\f\h\w\q\t\a\k\2\2\q\7\6\b\y\a\r\8\5\q\i\8\z\o\0\h\3\b\g\u\f\j\s\n\l\0\c\y\1\i\9\6\e\o\l\x\z\h\w\c\z\w\4\h\n\o\z\j\e\b\b\g\2\s\5\k\b\q\1\n\s\p\5\s\p\e\p\j\r\2\1\n\c\q\o\d\h\v\y\n\8\k\r\9\8\j\8\d\d\r\t\d\a\r\4\f\e\n\2\t\d\y\1\r\z\x\i\v\9\g\f\c\y\q\d\y\3\b\f\0\f\z\o\u\g\k\v\1\1\f\g\q\3\n\d\p\0\f\u\t\7\m\g\l\3\l\6\x\a\u\l\g\o\e\r\8\v\s\z\v\b\c\9\o\o\0\d\j\v\e\c\7\c\x\y\h\2\p\2\v\o\y\t\o\w\f\z\s\g\f\p\u\g\e\h\p\e\n\7\g\q\n\0\k\5\b\7\f\f\r\w\h\v\6\j\n\i\z\5\7\5\9\v\9\z\o\r\y\y\8\0\v\3\g\d\m\2\j\k\r\y\w\7\0\r\t\3\e\1\7\z\c\z\y\3\c\g\h\0\m\v\i\c\5\q\i\8\t\a\g\t\p\z\m\2\d\v\x\j\a\y\2\t\t\9\g\p\4\2\v\n\n\m\8\f\3\y\z\0\u\i\c\w\s\m\p\4\d\r\x\n\2\9\7\d\w\4\7\u\c\b\b\c\c\f\9\9\6\6\b\0\u\s\m\i\k\o\m\x\y\v\x\m\s\b\0\f\j\z\k\t\u\x\v\9\m\u\v\t\x\e\0\a\b\0\s\y\7\e\n\t\j\o\k\8\h\m\2\b\5\p\h\t\d\3\b\d\i\m\z\j\q\4\2\u\6\2\1\t\w\s\q\7\1\k\q\6\s\7\d\2\i\6\l\s\u\a\5\q\o\n\4\s\8\r\t\2\e\h\7\7\5\i\6\w\u\g\j\a\6\t\w\n\o\z\u\7\i\a\s\v\l\4\3\c\e\6\v\t\s\d\m\q\1\l\c\u\e\6\j\5\7\s\r\6\8\d\0\3\m\h\w\h\n\4\4\v\a\p\s\i\7\5\5\i\z\7\j\0\z\a\q\t\b\r\3\3\x\3\f\0\s\i\y\f\v\v\n\0\7\j\c\a\x\0\m\b\9\6\z\u\v\u\g\a\v\1\k\z\p\d\u\o\j\t\f\i\n\h\u\0\m\q\r\1\6\c\3\g\i\j\h\a\u\4\2\k\y\j\s\i\h\d\k\c\e\e\4\x\7\a\g\6\e\u\s\w\g\0\r\y\y\5\i\4\a\y\j\k\a\2\4\5\f\n\7\z\6\i\7\w\x\x\t\h\q\0\p\7\g\t\v\6\f\9\j\4\s\3\y\g\f\4\l\w\9\j\z\s\1\8\i\1\j\1\p\h\u\d\w\y\w\3\g\w\c\o\r\a\a\n\f\x\z\8\v\l\j\l\s\r\u\w\e\4\e\m\2\u\m\2\a\z\2\b\k\o\k\x\a\j\f\2\z\l\5\l\p\b\8\b\a\g\w\p\x\a\v\6\y\9\f\m\x\k\8\j\q\2\n\f\m\h\l\1\g\7\o\
u\b\f\i\0\z\x\2\8\2\m\j\r\9\5\l\t\2\0\f\8\e\7\d\q\a\s\v\v\g\l\p\g\p\8\u\3\x\5\c\5\4\0\n\5\g\y\i\e\b\p\1\i\z\4\0\z\c\1\5\w\k\w\j\e\v\m\g\3\8\w\3\y\y\4\9\2\8\p\m\8\n\c\i\t\n\v\n\n\5\a\h\c\r\4\g\1\e\g\8\f\m\u\0\6\e\i\e\d\t\o\j\e\5\x\g\s\z\q\7\y\9\0\b\o\t\5\h\t\f\y\4\m\r\l\k\v\7\n\y\2\7\7\x\c\4\z\g\m\a\2\d\j\n\7\t\s\l\c\7\i\m\x\2\3\g\t\b\2\t\g\q\k\t\y\o\n\i\a\3\4\1\0\2\y\i\n\b\5\n\m\m\v\n\3\t\w\8\6\u\f\k\z\g\z\9\v\h\z\g\2\m\r\b\z\8\q\0\h\c\j\2\p\l\d\w\2\d\y\t\m\v\u\1\p\0\g\t\x\r\q\c\f\2\m\p\7\0\i\e\r\y\h\3\p\3\3\8\c\m\o\k\n\j\d\s\6\r\z\0\7\v\6\3\9\z\2\1\n\n\j\x\0\4\w\t\6\s\z\9\a\c\g\m\f\t\p\w\j\h\k\1\8\8\2\u\a\3\o\e\w\r\4\l\k\p\t\1\q\0\4\d\z\v\9\5\0\q\w\9\o\0\7\4\6\o\g\g\7\t\f\o\b\x\h\g\f\f\w\e\w\c\8\g\b\l\k\7\c\8\3\q\z\g\r\d\8\w\4\w\9\x\a\7\c\c\i\p\9\2\x\r\r\z\s\o\w\9\u\5\5\h\f\n\j\1\o\h\0\0\b\y\t\6\i\z\y\m\6\q\9\4\v\5\b\g\3\r\l\z\4\e\v\5\z\a\t\f\v\y\w\g\x\b\k\w\e\n\8\2\k\k\w\p\4\y\9\4\q\p\f\5\0\2\p\t\0\z\n\p\0\1\f\7\2\3\v\q\y\t\g\j\u\u\v\7\f\6\d\q\a\5\u\c\i\i\2\l\a\l\0\n\v\e\s\b\l\f\c\j\l\h\i\v\c\3\w\r\u\e\i\1\p\n\2\f\x\n\7\5\m\h\h\o\9\e\1\m\x\r\p\k\l\s\u\i\7\8\e\z\6\e\u\0\e\0\f\i\4\4\g\0\h\x\1\y\5\8\y\z\y\h\a\a\8\u\d\q\u\z\8\w\l\c\z\7\q\v\g\7\6\5\c\f\j\m\u\m\3\2\e\z\z\b\h\k\f\0\s\9\8\u\y\v\h\h\8\2\i\f\2\a\7\6\a\a\a\b\7\b\f\n\g\t\i\s\b\m\9\s\e\u\s\t\4\r\t\v\o\i\n\h\w\m\4\7\d\v\7\v\7\c\j\z\e\x\a\m\9\w\o\5\5\o\w\f\i\z\v\z\h\5\k\g\t\g\6\b\5\e\c\3\l\i\n\o\x\t\5\m\v\c\3\1\7\7\h\t\2\v\i\v\3\h\y\h\g\p\n\s\2\b\h\a\3\2\0\e\q\i\a\y\5\d\v\n\e\6\v\n\m\n\r\d\s\a\8\u\j\v\l\k\q\9\c\u\y\t\x\x\1\z\0\6\g\u\s\8\i\8\k\u\6\3\c\t\8\r\e\4\5\2\l\d\0\g\i\b\0\v\r\y\i\u\e\d\0\7\1\u\6\h\2\8\r\z\2\u\4\2\7\7\c\n\u\u\5\9\2\d\6\o\i\a\w\e\h\0\f\7\p\r\4\0\q\l\o\f\q\r\x\6\x\u\6\9\7\b\x\4\6\p\k\e\g\x\s\7\d\m\t\m\d\5\4\b\o\z\a\x\y\o\o\1\u\t\9\0\t\o\r\5\2\7\z\l\1\k\c\d\c\l\z\h\1\m\c\e\z\b\1\e\q\m\i\8\k\e\k\f\v\k\a\6\g\v\2\g\v\2\p\t\7\m\z\a\e\7\p\t\p\6\8\g\9\t\8\6\q\4\k\f\7\e\1\4\0\s\w\u\z\o\f\0\k\7\l\4\s\a\l\f\m\w\n\b\l\w\i\z\m\g\k\p\0\i\b\3\e\i\c\o\5\2\9\9\t\y\d\j\4\t\i\j\g\2\c\5\z\f\7\t\9\j\l\j\h\r\f\7\y\t\2\z\7\8\a\s\k\2\h\m\l\9\y\x\a\e\1\n\d\o\1\z\q\0\t\h\n\b\0\h\y\x\z\0\3\h\1\w\k\j\d\0\x\l\3\d\c\c\c\u\y\u\6\t\y\3\v\6\2\7\g\5\9\o\q\f\8\o\n\y\x\3\t\s\4\c\z\x\v\g\n\4\2\h\o\n\r\v\4\t\z\m\2\5\0\d\8\x\t\i\s\y\n\p\1\r\z\6\v\9\x\1\9\v\i\1\f\5\i\m\b\u\c\g\1\u\x\b\t\5\p\y\3\g\z\7\d\t\9\1\6\b\9\g\9\t\2\w\0\2\w\2\8\b\r\q\r\a\n\e\x\m\v\c\s\f\j\b\h\c\d\n\u\f\a\g\8\p\m\1\s\7\4\q\f\7\x\h\x\w\m\m\7\a\n\t\o\t\3\p\0\8\a\k\h\7\m\u\1\7\j\e\9\k\f\r\g\i\6\f\g\w\h\z\4\j\0\o\h\9\o\o\j\o\k\j\k\s\h\7\h\c\8\c\i\1\8\2\z\7\t\3\u\n\i\e\l\n\1\k\g\w\1\7\c\8\p\q\s\a\u\q\h\3\t\7\o\9\l\z\7\9\w\4\9\g\e\c\a\r\4\s\0\k\o\a\4\l\p\c\w\y\d\c\7\c\t\7\k\8\4\7\t\y\4\e\d\i\6\8\d\2\9\h\z\4\3\7\u\p\v\r\9\d\m\4\h\i\k\x\b\i\x\d\2\x\k\a\p\7\4\g\c\o\a\1\3\2\o\3\7\9\x\2\z\j\a\0\s\1\t\n\k\e\a\k\p\t\r\1\m\i\0\k\b\e\8\i\n\f\3\j\f\0\0\0\r\h\2\l\c\1\m\f\e\c\t\4\9\s\d\w\9\7\r\b\x\2\8\t\2\2\7\5\0\d\9\n\n\4\q\1\3\w\x\e\h\5\8\l\i\c\a\a\d\k\y\x\1\3\1\j\y\6\a\u\p\g\v\k\0\8\z\l\i\m\2\7\k\7\d\4\9\c\3\s\p\o\z\g\y\a\h\t\w\d\e\t\h\n\0\0\2\a\l\2\p\4\3\p\p\z\6\6\y\y\7\7\0\0\o\r\p\x\a\h\5\v\g\v\j\w\q\k\q\c\n\m\5\c\e\n\y\d\s\c\a\n\s\l\w\i\o\0\7\t\1\9\p\i\g\1\m\z\s\i\4\x\w\z\e\i\5\2\z\9\h\i\w\d\7\z\l\p\6\x\8\n\b\9\k\r\8\v\6\z\z\w\7\u\j\i\y\m\8\u\6\l\p\9\s\x\y\3\5\x\k\q\a\s\8\z\x\e\5\8\h\t\b\p\x\0\2\d\y\j\q\a\i\6\h\1\e\4\0\3\5\q\7\e\6\2\p\l\g\x\6\z\j\a\k\2\1\t\1\j\0\p\2\2\4\u\k\k\m\6\8\6\j\f\t\h\z\3\t\k\7\w\y\2\y\m\g\4\i\1\5\z\r\3\5\c\b\t\z\8\1\5\k\y\j\4\c\s\w\i\g\9\w\w\9\v\f\o\9\v\f\y\v\i\x\s\4\t\x\r\w\t\6\3\d\0\l\8\s\c\7\o\0\1\m\i\b\6\7\b\4\r\7\d\6\s\3\4\z\l\k\d\g\2\g\m\f\1\3\w\m\y\a\o\6\v\i\g\n\7
\2\m\o\c\k\3\3\q\5\1\g\v\y\4\o\d\t\r\v\h\f\1\7\v\3\1\a\k\w\h\n\t\e\x\6\l\0\a\5\h\p\1\x\8\x\c\a\5\v\l\0\l\0\m\9\n\z\d\x\0\0\w\s\b\x\i\y\i\n\8\2\d\8\l\j\g\c\k\t\5\3\1\j\b\s\m\l\3\m\g\3\d\d\c\j\f\v\p\f\v\4\9\7\h\l\c\8\u\w\1\g\n\7\h\o\6\t\a\8\x\k\s\n\g\5\9\8\t\5\k\r\i\l\i\4\u\x\4\t\q\e\m\l\t\y\t\1\k\b\g\7\p\o\l\0\w\1\l\7\y\2\2\1\o\1\h\d\s\a\h\1\6\j\p\e\6\j\x\r\e\w\h\r\d\x\a\e\c\m\r\9\a\o\x\z\u\r\1\c\g\3\c\h\b\7\m\a\e\1\i\i\2\t\o\0\7\4\k\8\6\a\9\a\9\k\5\s\y\g\s\a\p\3\h\b\k\e\q\h\8\5\a\a\w\c\s\f\l\c\x\w\i\j\6\2\x\i\v\b\f\h\9\c\8\z\b\s\y\i\2\o\z\9\z\l\h\v\1\o\9\g\3\k\4\i\f\1\w\i\a\q\e\u\m\3\7\y\l\i\4\8\w\i\8\z\h\m\9\v\f\o\8\5\8\p\i\8\o\1\5\a\8\f\d\2\e\2\s\c\t\s\1\r\p\j\s\i\k\9\g\j\o\l\b\9\a\g\w\c\f\6\m\b\h\g\a\v\i\f\9\2\n\v\3\r\h\e\p\d\1\z\t\9\q\v\1\v\l\l\1\b\o\p\5\y\m\z\d\m\p\3\7\5\w\0\o\0\8\u\u\d\w\k\f\k\w\v\g\7\2\m\7\o\m\m\5\g\o\o\4\g\h\v\c\v\j\y\y\9\3\b\p\n\x\f\k\g\b\3\g\0\a\1\2\h\m\8\a\b\q\m\f\r\5\h\i\g\n\r\b\4\p\e\g\4\z\u\n\p\f\n\6\w\x\n\l\6\o\c\b\b\o\t\7\6\v\1\z\b\a\a\z\d\q\d\l\3\i\4\m\k\b\l\0\7\t\l\u\t\u\8\6\m\7\k\h\e\e\7\p\f\r\n\d\o\n\o\j\6\p\1\y\o\9\2\y\1\m\l\k\1\y\6\y\r\2\e\2\h\8\j\v\q\m\8\q\a\f\w\a\b\d\t\6\j\w\x\s\4\v\b\r\e\f\3\z\4\d\n\p\n\e\z\w\n\m\n\q\8\l\s\z\9\e\e\t\8\o\b\h\8\4\p\1\6\t\b\h\3\e\m\l\3\q\m\e\6\6\l\s\l\s\o\q\8\9\x\r\g\z\v\0\d\r\g\4\v\t\w\w\i\2\0\j\0\f\1\j\z\s\b\o\9\w\6\4\t\q\m\z\q\v\n\m\2\5\w\l\t\5\w\7\5\0\m\d\m\u\0\g\2\n\n\y\6\4\n\4\u\6\k\q\e\h\f\f\4\0\1\6\l\d\j\n\b\v\2\8\w\n\z\2\o\h\1\w\8\k\r\p\p\c\e\o\y\3\w\t\8\d\j\3\u\n\v\r\h\n\p\j\7\x\o\8\r\q\a\t\1\w\x\n\k\3\v\6\z\v\4\a\5\v\c\n\b\c\p\f\i\4\o\i\4\k\o\e\z\t\j\j\5\9\u\v\3\w\j\t\t\d\m\4\k\0\w\h\z\6\o\1\9\2\9\1\u\0\m\b\x\v\1\y\v\v\x\m\e\8\a\w\l\1\k\v\p\0\u\l\l\v\q\7\g\2\t\w\o\j\b\v\k\6\e\i\h\9\4\k\k\l\r\s\i\q\w\i\l\9\i\u\t\8\a\i\a\a\t\k\o\x\e\v\y\a\3\5\y\x\g\m\h\r\3\u\y\d\7\u\o\7\u\r\d\b\9\e\x\k\k\s\p\j\v\z\h\3\4\6\h\5\n\p\3\o\a\j\e\g\q\0\3\v\b\1\4\4\i\c\q\d\4\l\s\z\h\s\n\i\g\6\e\g\c\m\m\d\z\y\6\c\e\m\5\r\8\m\v\a\0\x\s\k\k\6\b\q\l\r\o\6\a\i\h\m\y\8\b\g\t\c\6\h\e\e\q\1\l\y\7\q\i\x\x\p\t\o\3\m\w\w\w\5\g\5\b\h\v\7\m\1\p\r\u\g\p\v\j\a\1\x\b\5\e\9\3\4\t\y\1\b\5\c\f\s\6\q\g\h\h\h\9\7\6\e\j\l\g\o\a\d\u\r\v\i\z\t\f\s\f\u\4\y\c\i\l\j\7\t\1\2\e\3\9\4\k\g\t\k\y\p\2\a\a\3\d\8\8\h\n\q\d\d\3\t\l\p\5\0\8\p\t\b\c\t\t\e\b\z\p\l\c\h\m\m\p\i\l\7\p\n\8\r\2\d\v\b\2\6\j\q\2\z\m\u\c\d\9\3\4\g\r\h\h\4\f\k\h\r\m\r\1\b\n\x\o\0\2\0\z\1\f\p\p\y\r\2\w\t\y\0\1\r\s\9\4\q\o\k\0\k\6\i\d\p\j\w\j\p\g\n\b\8\9\c\n\8\8\q\e\k\g\z\u\f\6\y\y\b\3\0\v\n\w\d\m\5\e\0\5\1\d\u\w\p\s\b\x\3\k\h\c\u\j\i\n\t\6\3\a\z\6\i\c\q\y\n\n\f\h\u\t\j\k\u\z\f\8\f\p\j\p\0\4\i\u\3\h\l\6\m\0\w\7\e\v\u\4\c\f\g\l\a\p\2\s\e\o ]] 00:10:41.259 ************************************ 00:10:41.259 END TEST dd_rw_offset 00:10:41.259 ************************************ 00:10:41.259 00:10:41.259 real 0m1.071s 00:10:41.259 user 0m0.735s 00:10:41.259 sys 0m0.431s 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:41.259 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:41.518 05:20:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.518 05:20:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.518 [2024-11-20 05:20:55.845311] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:41.518 [2024-11-20 05:20:55.845466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60410 ] 00:10:41.518 { 00:10:41.518 "subsystems": [ 00:10:41.518 { 00:10:41.518 "subsystem": "bdev", 00:10:41.518 "config": [ 00:10:41.518 { 00:10:41.518 "params": { 00:10:41.518 "trtype": "pcie", 00:10:41.518 "traddr": "0000:00:10.0", 00:10:41.518 "name": "Nvme0" 00:10:41.518 }, 00:10:41.518 "method": "bdev_nvme_attach_controller" 00:10:41.518 }, 00:10:41.518 { 00:10:41.518 "method": "bdev_wait_for_examine" 00:10:41.518 } 00:10:41.518 ] 00:10:41.518 } 00:10:41.518 ] 00:10:41.518 } 00:10:41.518 [2024-11-20 05:20:55.996458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.779 [2024-11-20 05:20:56.030494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.779 [2024-11-20 05:20:56.061554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.779  [2024-11-20T05:20:56.551Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:42.038 00:10:42.038 05:20:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.038 00:10:42.038 real 0m15.624s 00:10:42.038 user 0m11.592s 00:10:42.038 sys 0m4.762s 00:10:42.038 05:20:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.038 05:20:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:42.038 ************************************ 00:10:42.038 END TEST spdk_dd_basic_rw 00:10:42.038 ************************************ 00:10:42.038 05:20:56 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:42.038 05:20:56 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.038 05:20:56 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.038 05:20:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:42.038 ************************************ 00:10:42.038 START TEST spdk_dd_posix 00:10:42.038 ************************************ 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:42.038 * Looking for test storage... 
00:10:42.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:10:42.038 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.297 --rc genhtml_branch_coverage=1 00:10:42.297 --rc genhtml_function_coverage=1 00:10:42.297 --rc genhtml_legend=1 00:10:42.297 --rc geninfo_all_blocks=1 00:10:42.297 --rc geninfo_unexecuted_blocks=1 00:10:42.297 00:10:42.297 ' 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.297 --rc genhtml_branch_coverage=1 00:10:42.297 --rc genhtml_function_coverage=1 00:10:42.297 --rc genhtml_legend=1 00:10:42.297 --rc geninfo_all_blocks=1 00:10:42.297 --rc geninfo_unexecuted_blocks=1 00:10:42.297 00:10:42.297 ' 00:10:42.297 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.297 --rc genhtml_branch_coverage=1 00:10:42.298 --rc genhtml_function_coverage=1 00:10:42.298 --rc genhtml_legend=1 00:10:42.298 --rc geninfo_all_blocks=1 00:10:42.298 --rc geninfo_unexecuted_blocks=1 00:10:42.298 00:10:42.298 ' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.298 --rc genhtml_branch_coverage=1 00:10:42.298 --rc genhtml_function_coverage=1 00:10:42.298 --rc genhtml_legend=1 00:10:42.298 --rc geninfo_all_blocks=1 00:10:42.298 --rc geninfo_unexecuted_blocks=1 00:10:42.298 00:10:42.298 ' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:42.298 * First test run, liburing in use 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:42.298 ************************************ 00:10:42.298 START TEST dd_flag_append 00:10:42.298 ************************************ 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=780d0wlx2yccdtazig1fcecn8fcoevdy 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=j1cy9gbefq7uev92opni48y1r5rkhswe 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 780d0wlx2yccdtazig1fcecn8fcoevdy 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s j1cy9gbefq7uev92opni48y1r5rkhswe 00:10:42.298 05:20:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:42.298 [2024-11-20 05:20:56.635077] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
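The append pass launched above boils down to: write two files, copy the first onto the second with the append output flag, and confirm the destination ends up holding its old bytes followed by the new ones. A minimal stand-alone sketch of the same idea with GNU dd (which takes the same append flag; file names and payloads here are illustrative, not taken from this run):

    # write fresh source and destination files
    printf %s AAAA > dd.dump0
    printf %s BBBB > dd.dump1
    # append the source onto the destination instead of overwriting it
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
    # destination must now hold its old bytes followed by the appended ones
    [[ "$(cat dd.dump1)" == BBBBAAAA ]] && echo "append flag preserved the existing bytes"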
00:10:42.298 [2024-11-20 05:20:56.635199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60477 ] 00:10:42.298 [2024-11-20 05:20:56.787080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.556 [2024-11-20 05:20:56.829021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.556 [2024-11-20 05:20:56.864775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.556  [2024-11-20T05:20:57.069Z] Copying: 32/32 [B] (average 31 kBps) 00:10:42.556 00:10:42.556 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ j1cy9gbefq7uev92opni48y1r5rkhswe780d0wlx2yccdtazig1fcecn8fcoevdy == \j\1\c\y\9\g\b\e\f\q\7\u\e\v\9\2\o\p\n\i\4\8\y\1\r\5\r\k\h\s\w\e\7\8\0\d\0\w\l\x\2\y\c\c\d\t\a\z\i\g\1\f\c\e\c\n\8\f\c\o\e\v\d\y ]] 00:10:42.556 00:10:42.556 real 0m0.464s 00:10:42.556 user 0m0.246s 00:10:42.556 sys 0m0.200s 00:10:42.556 ************************************ 00:10:42.556 END TEST dd_flag_append 00:10:42.556 ************************************ 00:10:42.556 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.556 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:42.815 ************************************ 00:10:42.815 START TEST dd_flag_directory 00:10:42.815 ************************************ 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:42.815 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:42.815 [2024-11-20 05:20:57.153558] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:42.815 [2024-11-20 05:20:57.153660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:10:42.815 [2024-11-20 05:20:57.306508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.075 [2024-11-20 05:20:57.371131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.075 [2024-11-20 05:20:57.408316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.075 [2024-11-20 05:20:57.430309] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.075 [2024-11-20 05:20:57.430384] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.075 [2024-11-20 05:20:57.430407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:43.075 [2024-11-20 05:20:57.502233] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.075 05:20:57 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:43.075 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:43.334 [2024-11-20 05:20:57.622789] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:43.334 [2024-11-20 05:20:57.622926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60515 ] 00:10:43.334 [2024-11-20 05:20:57.775511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.334 [2024-11-20 05:20:57.815633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.593 [2024-11-20 05:20:57.849017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.593 [2024-11-20 05:20:57.870981] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.593 [2024-11-20 05:20:57.871053] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.593 [2024-11-20 05:20:57.871079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:43.593 [2024-11-20 05:20:57.940261] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.593 00:10:43.593 real 0m0.903s 00:10:43.593 user 0m0.498s 00:10:43.593 sys 0m0.195s 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:43.593 ************************************ 00:10:43.593 END TEST dd_flag_directory 00:10:43.593 ************************************ 00:10:43.593 05:20:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:10:43.593 05:20:58 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:43.593 ************************************ 00:10:43.593 START TEST dd_flag_nofollow 00:10:43.593 ************************************ 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:43.593 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:43.852 [2024-11-20 05:20:58.110327] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
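The nofollow case being set up above hinges on O_NOFOLLOW: opening the symlink with the nofollow input flag must fail with "Too many levels of symbolic links", while the same copy without the flag follows the link and succeeds. A minimal sketch with GNU dd, using illustrative names:

    printf %s data > dd.dump0
    ln -fs dd.dump0 dd.dump0.link
    # with nofollow, opening the symlink is refused (ELOOP)
    if ! dd if=dd.dump0.link of=/dev/null iflag=nofollow status=none 2>/dev/null; then
        echo "nofollow rejected the symlink as expected"
    fi
    # without the flag the link is followed and the copy goes through
    dd if=dd.dump0.link of=/dev/null status=none && echo "plain copy through the link succeeded"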
00:10:43.852 [2024-11-20 05:20:58.110447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:10:43.852 [2024-11-20 05:20:58.263302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.852 [2024-11-20 05:20:58.303648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.852 [2024-11-20 05:20:58.336823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.852 [2024-11-20 05:20:58.359267] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:43.852 [2024-11-20 05:20:58.359352] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:43.852 [2024-11-20 05:20:58.359398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:44.111 [2024-11-20 05:20:58.430107] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.111 05:20:58 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:44.111 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:44.111 [2024-11-20 05:20:58.552829] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:44.111 [2024-11-20 05:20:58.552955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:10:44.370 [2024-11-20 05:20:58.705077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.370 [2024-11-20 05:20:58.745726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.370 [2024-11-20 05:20:58.779831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.370 [2024-11-20 05:20:58.802129] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:44.370 [2024-11-20 05:20:58.802202] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:44.370 [2024-11-20 05:20:58.802249] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:44.370 [2024-11-20 05:20:58.866608] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:44.629 05:20:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:44.629 [2024-11-20 05:20:58.983598] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:44.629 [2024-11-20 05:20:58.983699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60560 ] 00:10:44.629 [2024-11-20 05:20:59.134603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.895 [2024-11-20 05:20:59.174709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.895 [2024-11-20 05:20:59.206972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.895  [2024-11-20T05:20:59.408Z] Copying: 512/512 [B] (average 500 kBps) 00:10:44.895 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ h54jn2ali39fh0ocl3nnbdzhogfrufkmzh2togvyk1fkxq3ziteskokwundts1elbkdo7ij4gy79xf8q0ani86llx0vvtlmhbl3jprgbrvnpj8qujjhy178jwxge2b40lj4wfr78y52zbbfi4642vjmf96xv7s20rjyonfrd4b8brjmticy0wypbs8mj7togj571r12orzo6exmjph9n4lue529zwhnb4a36mc1lfj65lu7flyhrjwnmkcfryt68vecwsmp4631wg6pvgd2m0i1cr3ga4sssgh3lrkj074mmerjvygj9wmo8p0vgy1smhtd42vt41eywesgk60ee74wxo68xag2tvz71weornjipw92xmlhgo8j1yjmr23azdxyhyhzj84hkmthzodm1hgr2ifrr87tbbobjyqpaeq1sp2lbjbv2p8o0s7h4u7nqhuq26w8pkr0j78sxtovq522yn66teyfj16qqmmetaxgsb7ahvlw5lalww8r62kts == \h\5\4\j\n\2\a\l\i\3\9\f\h\0\o\c\l\3\n\n\b\d\z\h\o\g\f\r\u\f\k\m\z\h\2\t\o\g\v\y\k\1\f\k\x\q\3\z\i\t\e\s\k\o\k\w\u\n\d\t\s\1\e\l\b\k\d\o\7\i\j\4\g\y\7\9\x\f\8\q\0\a\n\i\8\6\l\l\x\0\v\v\t\l\m\h\b\l\3\j\p\r\g\b\r\v\n\p\j\8\q\u\j\j\h\y\1\7\8\j\w\x\g\e\2\b\4\0\l\j\4\w\f\r\7\8\y\5\2\z\b\b\f\i\4\6\4\2\v\j\m\f\9\6\x\v\7\s\2\0\r\j\y\o\n\f\r\d\4\b\8\b\r\j\m\t\i\c\y\0\w\y\p\b\s\8\m\j\7\t\o\g\j\5\7\1\r\1\2\o\r\z\o\6\e\x\m\j\p\h\9\n\4\l\u\e\5\2\9\z\w\h\n\b\4\a\3\6\m\c\1\l\f\j\6\5\l\u\7\f\l\y\h\r\j\w\n\m\k\c\f\r\y\t\6\8\v\e\c\w\s\m\p\4\6\3\1\w\g\6\p\v\g\d\2\m\0\i\1\c\r\3\g\a\4\s\s\s\g\h\3\l\r\k\j\0\7\4\m\m\e\r\j\v\y\g\j\9\w\m\o\8\p\0\v\g\y\1\s\m\h\t\d\4\2\v\t\4\1\e\y\w\e\s\g\k\6\0\e\e\7\4\w\x\o\6\8\x\a\g\2\t\v\z\7\1\w\e\o\r\n\j\i\p\w\9\2\x\m\l\h\g\o\8\j\1\y\j\m\r\2\3\a\z\d\x\y\h\y\h\z\j\8\4\h\k\m\t\h\z\o\d\m\1\h\g\r\2\i\f\r\r\8\7\t\b\b\o\b\j\y\q\p\a\e\q\1\s\p\2\l\b\j\b\v\2\p\8\o\0\s\7\h\4\u\7\n\q\h\u\q\2\6\w\8\p\k\r\0\j\7\8\s\x\t\o\v\q\5\2\2\y\n\6\6\t\e\y\f\j\1\6\q\q\m\m\e\t\a\x\g\s\b\7\a\h\v\l\w\5\l\a\l\w\w\8\r\6\2\k\t\s ]] 00:10:44.895 00:10:44.895 real 0m1.319s 00:10:44.895 user 0m0.703s 00:10:44.895 sys 0m0.387s 00:10:44.895 ************************************ 00:10:44.895 END TEST dd_flag_nofollow 00:10:44.895 ************************************ 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.895 05:20:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:45.154 ************************************ 00:10:45.154 START TEST dd_flag_noatime 00:10:45.154 ************************************ 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732080059 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732080059 00:10:45.154 05:20:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:46.176 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:46.176 [2024-11-20 05:21:00.495371] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:46.176 [2024-11-20 05:21:00.495466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60603 ] 00:10:46.176 [2024-11-20 05:21:00.647792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.436 [2024-11-20 05:21:00.689208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.436 [2024-11-20 05:21:00.723337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.436  [2024-11-20T05:21:00.949Z] Copying: 512/512 [B] (average 500 kBps) 00:10:46.436 00:10:46.436 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:46.436 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732080059 )) 00:10:46.436 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:46.436 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732080059 )) 00:10:46.436 05:21:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:46.695 [2024-11-20 05:21:00.965196] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
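The noatime run started above reads the source with the noatime input flag and then checks that the file's access time has not moved; a plain read afterwards is expected to advance it. A rough GNU dd sketch of the same check, assuming the file is owned by the caller and the filesystem is not mounted noatime:

    printf %s data > dd.dump0
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    # a read with iflag=noatime must leave the access time untouched
    dd if=dd.dump0 of=/dev/null iflag=noatime status=none
    (( $(stat --printf=%X dd.dump0) == atime_before )) && echo "atime unchanged after noatime read"
    # an ordinary read is allowed to move it forward
    dd if=dd.dump0 of=/dev/null status=none
    (( $(stat --printf=%X dd.dump0) > atime_before )) && echo "plain read advanced the atime"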
00:10:46.695 [2024-11-20 05:21:00.965298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60616 ] 00:10:46.695 [2024-11-20 05:21:01.115031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.695 [2024-11-20 05:21:01.165370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.695 [2024-11-20 05:21:01.200531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.954  [2024-11-20T05:21:01.467Z] Copying: 512/512 [B] (average 500 kBps) 00:10:46.954 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732080061 )) 00:10:46.954 00:10:46.954 real 0m1.950s 00:10:46.954 user 0m0.507s 00:10:46.954 sys 0m0.418s 00:10:46.954 ************************************ 00:10:46.954 END TEST dd_flag_noatime 00:10:46.954 ************************************ 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 ************************************ 00:10:46.954 START TEST dd_flags_misc 00:10:46.954 ************************************ 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:46.954 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:47.213 [2024-11-20 05:21:01.473540] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
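The dd_flags_misc loop entered above pairs every read flag in flags_ro with every write flag in flags_rw and checks each copy byte-for-byte. A stand-alone sketch of the same matrix with GNU dd (512-byte payload so the direct case stays aligned; O_DIRECT can still be refused by filesystems that require larger alignment):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    # one aligned block of test data
    dd if=/dev/urandom of=dd.dump0 bs=512 count=1 status=none
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            # copy with each read/write flag combination and verify the payload
            dd if=dd.dump0 of=dd.dump1 bs=512 iflag="$flag_ro" oflag="$flag_rw" status=none
            cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$flag_ro oflag=$flag_rw"
        done
    done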
00:10:47.213 [2024-11-20 05:21:01.473639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60645 ] 00:10:47.213 [2024-11-20 05:21:01.620003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.213 [2024-11-20 05:21:01.654435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.213 [2024-11-20 05:21:01.685210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.213  [2024-11-20T05:21:01.986Z] Copying: 512/512 [B] (average 500 kBps) 00:10:47.473 00:10:47.473 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ vxxh9ivwwa15b011or0oaimh04hfg66okx3qa0xvwau1nxastv63g46nk8rt6odo7rc8memtse9oya0oiwacyud7osjgtblz4pyf375w2affp535n12yut22hxxuzgjwm35zjy0ks0ki2c3i7tlvtvzh4fpkv7qslqz5hbk2nflemajxepmcj2ovh3kr4wlf3p9gfrkp3mscxlvv1kqxrxwz33v3holj8hgoz4cxjuyqzwv1ku4mutx6u6p3fk46wrpamnc5dv8b8h16acusslkeylxlh6kv21r10yt9iv2x785iob3zh0f04xulvbyiof53m8f5nr9cgayqk2ixpbfw4r5kxb9la0vnxtj2wn28ch3oi4vvlkm2u7qxws5h2rc8fgq7xtlq5kgg09ksddzj6v6jjpojt9nh99pnb05yzd0oirs6i8db4jkpmgafkt88uogalh3lddr313dlijx05mf70t9nc7jbevfypeprdcp950iicz15v0k8i3hi == \v\x\x\h\9\i\v\w\w\a\1\5\b\0\1\1\o\r\0\o\a\i\m\h\0\4\h\f\g\6\6\o\k\x\3\q\a\0\x\v\w\a\u\1\n\x\a\s\t\v\6\3\g\4\6\n\k\8\r\t\6\o\d\o\7\r\c\8\m\e\m\t\s\e\9\o\y\a\0\o\i\w\a\c\y\u\d\7\o\s\j\g\t\b\l\z\4\p\y\f\3\7\5\w\2\a\f\f\p\5\3\5\n\1\2\y\u\t\2\2\h\x\x\u\z\g\j\w\m\3\5\z\j\y\0\k\s\0\k\i\2\c\3\i\7\t\l\v\t\v\z\h\4\f\p\k\v\7\q\s\l\q\z\5\h\b\k\2\n\f\l\e\m\a\j\x\e\p\m\c\j\2\o\v\h\3\k\r\4\w\l\f\3\p\9\g\f\r\k\p\3\m\s\c\x\l\v\v\1\k\q\x\r\x\w\z\3\3\v\3\h\o\l\j\8\h\g\o\z\4\c\x\j\u\y\q\z\w\v\1\k\u\4\m\u\t\x\6\u\6\p\3\f\k\4\6\w\r\p\a\m\n\c\5\d\v\8\b\8\h\1\6\a\c\u\s\s\l\k\e\y\l\x\l\h\6\k\v\2\1\r\1\0\y\t\9\i\v\2\x\7\8\5\i\o\b\3\z\h\0\f\0\4\x\u\l\v\b\y\i\o\f\5\3\m\8\f\5\n\r\9\c\g\a\y\q\k\2\i\x\p\b\f\w\4\r\5\k\x\b\9\l\a\0\v\n\x\t\j\2\w\n\2\8\c\h\3\o\i\4\v\v\l\k\m\2\u\7\q\x\w\s\5\h\2\r\c\8\f\g\q\7\x\t\l\q\5\k\g\g\0\9\k\s\d\d\z\j\6\v\6\j\j\p\o\j\t\9\n\h\9\9\p\n\b\0\5\y\z\d\0\o\i\r\s\6\i\8\d\b\4\j\k\p\m\g\a\f\k\t\8\8\u\o\g\a\l\h\3\l\d\d\r\3\1\3\d\l\i\j\x\0\5\m\f\7\0\t\9\n\c\7\j\b\e\v\f\y\p\e\p\r\d\c\p\9\5\0\i\i\c\z\1\5\v\0\k\8\i\3\h\i ]] 00:10:47.473 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:47.473 05:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:47.473 [2024-11-20 05:21:01.885851] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:47.473 [2024-11-20 05:21:01.885961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:10:47.731 [2024-11-20 05:21:02.031519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.731 [2024-11-20 05:21:02.066169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.731 [2024-11-20 05:21:02.097284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.732  [2024-11-20T05:21:02.245Z] Copying: 512/512 [B] (average 500 kBps) 00:10:47.732 00:10:47.732 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ vxxh9ivwwa15b011or0oaimh04hfg66okx3qa0xvwau1nxastv63g46nk8rt6odo7rc8memtse9oya0oiwacyud7osjgtblz4pyf375w2affp535n12yut22hxxuzgjwm35zjy0ks0ki2c3i7tlvtvzh4fpkv7qslqz5hbk2nflemajxepmcj2ovh3kr4wlf3p9gfrkp3mscxlvv1kqxrxwz33v3holj8hgoz4cxjuyqzwv1ku4mutx6u6p3fk46wrpamnc5dv8b8h16acusslkeylxlh6kv21r10yt9iv2x785iob3zh0f04xulvbyiof53m8f5nr9cgayqk2ixpbfw4r5kxb9la0vnxtj2wn28ch3oi4vvlkm2u7qxws5h2rc8fgq7xtlq5kgg09ksddzj6v6jjpojt9nh99pnb05yzd0oirs6i8db4jkpmgafkt88uogalh3lddr313dlijx05mf70t9nc7jbevfypeprdcp950iicz15v0k8i3hi == \v\x\x\h\9\i\v\w\w\a\1\5\b\0\1\1\o\r\0\o\a\i\m\h\0\4\h\f\g\6\6\o\k\x\3\q\a\0\x\v\w\a\u\1\n\x\a\s\t\v\6\3\g\4\6\n\k\8\r\t\6\o\d\o\7\r\c\8\m\e\m\t\s\e\9\o\y\a\0\o\i\w\a\c\y\u\d\7\o\s\j\g\t\b\l\z\4\p\y\f\3\7\5\w\2\a\f\f\p\5\3\5\n\1\2\y\u\t\2\2\h\x\x\u\z\g\j\w\m\3\5\z\j\y\0\k\s\0\k\i\2\c\3\i\7\t\l\v\t\v\z\h\4\f\p\k\v\7\q\s\l\q\z\5\h\b\k\2\n\f\l\e\m\a\j\x\e\p\m\c\j\2\o\v\h\3\k\r\4\w\l\f\3\p\9\g\f\r\k\p\3\m\s\c\x\l\v\v\1\k\q\x\r\x\w\z\3\3\v\3\h\o\l\j\8\h\g\o\z\4\c\x\j\u\y\q\z\w\v\1\k\u\4\m\u\t\x\6\u\6\p\3\f\k\4\6\w\r\p\a\m\n\c\5\d\v\8\b\8\h\1\6\a\c\u\s\s\l\k\e\y\l\x\l\h\6\k\v\2\1\r\1\0\y\t\9\i\v\2\x\7\8\5\i\o\b\3\z\h\0\f\0\4\x\u\l\v\b\y\i\o\f\5\3\m\8\f\5\n\r\9\c\g\a\y\q\k\2\i\x\p\b\f\w\4\r\5\k\x\b\9\l\a\0\v\n\x\t\j\2\w\n\2\8\c\h\3\o\i\4\v\v\l\k\m\2\u\7\q\x\w\s\5\h\2\r\c\8\f\g\q\7\x\t\l\q\5\k\g\g\0\9\k\s\d\d\z\j\6\v\6\j\j\p\o\j\t\9\n\h\9\9\p\n\b\0\5\y\z\d\0\o\i\r\s\6\i\8\d\b\4\j\k\p\m\g\a\f\k\t\8\8\u\o\g\a\l\h\3\l\d\d\r\3\1\3\d\l\i\j\x\0\5\m\f\7\0\t\9\n\c\7\j\b\e\v\f\y\p\e\p\r\d\c\p\9\5\0\i\i\c\z\1\5\v\0\k\8\i\3\h\i ]] 00:10:47.732 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:47.990 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:47.990 [2024-11-20 05:21:02.295406] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:47.990 [2024-11-20 05:21:02.295505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60658 ] 00:10:47.990 [2024-11-20 05:21:02.450094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.990 [2024-11-20 05:21:02.490692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.249 [2024-11-20 05:21:02.524373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.249  [2024-11-20T05:21:02.762Z] Copying: 512/512 [B] (average 83 kBps) 00:10:48.249 00:10:48.249 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ vxxh9ivwwa15b011or0oaimh04hfg66okx3qa0xvwau1nxastv63g46nk8rt6odo7rc8memtse9oya0oiwacyud7osjgtblz4pyf375w2affp535n12yut22hxxuzgjwm35zjy0ks0ki2c3i7tlvtvzh4fpkv7qslqz5hbk2nflemajxepmcj2ovh3kr4wlf3p9gfrkp3mscxlvv1kqxrxwz33v3holj8hgoz4cxjuyqzwv1ku4mutx6u6p3fk46wrpamnc5dv8b8h16acusslkeylxlh6kv21r10yt9iv2x785iob3zh0f04xulvbyiof53m8f5nr9cgayqk2ixpbfw4r5kxb9la0vnxtj2wn28ch3oi4vvlkm2u7qxws5h2rc8fgq7xtlq5kgg09ksddzj6v6jjpojt9nh99pnb05yzd0oirs6i8db4jkpmgafkt88uogalh3lddr313dlijx05mf70t9nc7jbevfypeprdcp950iicz15v0k8i3hi == \v\x\x\h\9\i\v\w\w\a\1\5\b\0\1\1\o\r\0\o\a\i\m\h\0\4\h\f\g\6\6\o\k\x\3\q\a\0\x\v\w\a\u\1\n\x\a\s\t\v\6\3\g\4\6\n\k\8\r\t\6\o\d\o\7\r\c\8\m\e\m\t\s\e\9\o\y\a\0\o\i\w\a\c\y\u\d\7\o\s\j\g\t\b\l\z\4\p\y\f\3\7\5\w\2\a\f\f\p\5\3\5\n\1\2\y\u\t\2\2\h\x\x\u\z\g\j\w\m\3\5\z\j\y\0\k\s\0\k\i\2\c\3\i\7\t\l\v\t\v\z\h\4\f\p\k\v\7\q\s\l\q\z\5\h\b\k\2\n\f\l\e\m\a\j\x\e\p\m\c\j\2\o\v\h\3\k\r\4\w\l\f\3\p\9\g\f\r\k\p\3\m\s\c\x\l\v\v\1\k\q\x\r\x\w\z\3\3\v\3\h\o\l\j\8\h\g\o\z\4\c\x\j\u\y\q\z\w\v\1\k\u\4\m\u\t\x\6\u\6\p\3\f\k\4\6\w\r\p\a\m\n\c\5\d\v\8\b\8\h\1\6\a\c\u\s\s\l\k\e\y\l\x\l\h\6\k\v\2\1\r\1\0\y\t\9\i\v\2\x\7\8\5\i\o\b\3\z\h\0\f\0\4\x\u\l\v\b\y\i\o\f\5\3\m\8\f\5\n\r\9\c\g\a\y\q\k\2\i\x\p\b\f\w\4\r\5\k\x\b\9\l\a\0\v\n\x\t\j\2\w\n\2\8\c\h\3\o\i\4\v\v\l\k\m\2\u\7\q\x\w\s\5\h\2\r\c\8\f\g\q\7\x\t\l\q\5\k\g\g\0\9\k\s\d\d\z\j\6\v\6\j\j\p\o\j\t\9\n\h\9\9\p\n\b\0\5\y\z\d\0\o\i\r\s\6\i\8\d\b\4\j\k\p\m\g\a\f\k\t\8\8\u\o\g\a\l\h\3\l\d\d\r\3\1\3\d\l\i\j\x\0\5\m\f\7\0\t\9\n\c\7\j\b\e\v\f\y\p\e\p\r\d\c\p\9\5\0\i\i\c\z\1\5\v\0\k\8\i\3\h\i ]] 00:10:48.249 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:48.249 05:21:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:48.249 [2024-11-20 05:21:02.753277] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
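(Editorial aside) The flag names spdk_dd accepts here mirror GNU dd's iflag=/oflag= vocabulary, so the same direct/nonblock/sync/dsync matrix can be reproduced against the kernel I/O path as a rough baseline. A hedged coreutils-dd equivalent, with placeholder file names that are not the harness's dump files:

# Rough baseline sketch with GNU dd, not part of the SPDK harness.
head -c 512 /dev/urandom > baseline.in
for oflag in direct nonblock sync dsync; do
  # O_DIRECT needs block-aligned I/O, so keep bs at the 512-byte payload size
  dd if=baseline.in iflag=direct of=baseline.out \
     oflag="$oflag" bs=512 count=1 status=none
  cmp -s baseline.in baseline.out || echo "mismatch with oflag=$oflag"
done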
00:10:48.249 [2024-11-20 05:21:02.753374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:10:48.509 [2024-11-20 05:21:02.907035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.509 [2024-11-20 05:21:02.950499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.509 [2024-11-20 05:21:02.987631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.509  [2024-11-20T05:21:03.281Z] Copying: 512/512 [B] (average 250 kBps) 00:10:48.768 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ vxxh9ivwwa15b011or0oaimh04hfg66okx3qa0xvwau1nxastv63g46nk8rt6odo7rc8memtse9oya0oiwacyud7osjgtblz4pyf375w2affp535n12yut22hxxuzgjwm35zjy0ks0ki2c3i7tlvtvzh4fpkv7qslqz5hbk2nflemajxepmcj2ovh3kr4wlf3p9gfrkp3mscxlvv1kqxrxwz33v3holj8hgoz4cxjuyqzwv1ku4mutx6u6p3fk46wrpamnc5dv8b8h16acusslkeylxlh6kv21r10yt9iv2x785iob3zh0f04xulvbyiof53m8f5nr9cgayqk2ixpbfw4r5kxb9la0vnxtj2wn28ch3oi4vvlkm2u7qxws5h2rc8fgq7xtlq5kgg09ksddzj6v6jjpojt9nh99pnb05yzd0oirs6i8db4jkpmgafkt88uogalh3lddr313dlijx05mf70t9nc7jbevfypeprdcp950iicz15v0k8i3hi == \v\x\x\h\9\i\v\w\w\a\1\5\b\0\1\1\o\r\0\o\a\i\m\h\0\4\h\f\g\6\6\o\k\x\3\q\a\0\x\v\w\a\u\1\n\x\a\s\t\v\6\3\g\4\6\n\k\8\r\t\6\o\d\o\7\r\c\8\m\e\m\t\s\e\9\o\y\a\0\o\i\w\a\c\y\u\d\7\o\s\j\g\t\b\l\z\4\p\y\f\3\7\5\w\2\a\f\f\p\5\3\5\n\1\2\y\u\t\2\2\h\x\x\u\z\g\j\w\m\3\5\z\j\y\0\k\s\0\k\i\2\c\3\i\7\t\l\v\t\v\z\h\4\f\p\k\v\7\q\s\l\q\z\5\h\b\k\2\n\f\l\e\m\a\j\x\e\p\m\c\j\2\o\v\h\3\k\r\4\w\l\f\3\p\9\g\f\r\k\p\3\m\s\c\x\l\v\v\1\k\q\x\r\x\w\z\3\3\v\3\h\o\l\j\8\h\g\o\z\4\c\x\j\u\y\q\z\w\v\1\k\u\4\m\u\t\x\6\u\6\p\3\f\k\4\6\w\r\p\a\m\n\c\5\d\v\8\b\8\h\1\6\a\c\u\s\s\l\k\e\y\l\x\l\h\6\k\v\2\1\r\1\0\y\t\9\i\v\2\x\7\8\5\i\o\b\3\z\h\0\f\0\4\x\u\l\v\b\y\i\o\f\5\3\m\8\f\5\n\r\9\c\g\a\y\q\k\2\i\x\p\b\f\w\4\r\5\k\x\b\9\l\a\0\v\n\x\t\j\2\w\n\2\8\c\h\3\o\i\4\v\v\l\k\m\2\u\7\q\x\w\s\5\h\2\r\c\8\f\g\q\7\x\t\l\q\5\k\g\g\0\9\k\s\d\d\z\j\6\v\6\j\j\p\o\j\t\9\n\h\9\9\p\n\b\0\5\y\z\d\0\o\i\r\s\6\i\8\d\b\4\j\k\p\m\g\a\f\k\t\8\8\u\o\g\a\l\h\3\l\d\d\r\3\1\3\d\l\i\j\x\0\5\m\f\7\0\t\9\n\c\7\j\b\e\v\f\y\p\e\p\r\d\c\p\9\5\0\i\i\c\z\1\5\v\0\k\8\i\3\h\i ]] 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:48.768 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:48.768 [2024-11-20 05:21:03.226150] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:48.768 [2024-11-20 05:21:03.226274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60677 ] 00:10:49.026 [2024-11-20 05:21:03.377497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.026 [2024-11-20 05:21:03.412588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.026 [2024-11-20 05:21:03.445334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.026  [2024-11-20T05:21:03.797Z] Copying: 512/512 [B] (average 500 kBps) 00:10:49.284 00:10:49.284 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gba5wapbb45zx3y9my00ik67ajvsjxq3t2zbckoe7pk0wddsh8n1z624ae6mybwpkv4ib2d761ma4wge9ppi8ogls13ajzi4is7fd24xd2ugf4uf50fxco9qzqrprriflqu0i2pihots7i1ju3duect1vdbh1hhb04ydtc9hai21prdewpbf326ns69hc0vrn0uflc4tc7a4wjqvtyev418m87ylod8zcolrl8bqn409hrwyp36kekatg2ekk0y4idpz7tz0jq1c1wcmx7n311uldjq202s55xk2r3bywnp3oiy8hafx1b7o4j5cvp7imm12fzh1r8q3azcaytyq1rxb96pnilsjffm60q5nkdqlwi59gwofv5nmgvg8110jbm20l7voi58dxzs2svwxu7xk3l8m9mwwywizxo6rd7a9ak9zzmkda020h270sorp1lub7f4c98z85lo6k10rz9ozpe49btj8e0wlhbcs3spv05anp1knf1h23j45owbo == \g\b\a\5\w\a\p\b\b\4\5\z\x\3\y\9\m\y\0\0\i\k\6\7\a\j\v\s\j\x\q\3\t\2\z\b\c\k\o\e\7\p\k\0\w\d\d\s\h\8\n\1\z\6\2\4\a\e\6\m\y\b\w\p\k\v\4\i\b\2\d\7\6\1\m\a\4\w\g\e\9\p\p\i\8\o\g\l\s\1\3\a\j\z\i\4\i\s\7\f\d\2\4\x\d\2\u\g\f\4\u\f\5\0\f\x\c\o\9\q\z\q\r\p\r\r\i\f\l\q\u\0\i\2\p\i\h\o\t\s\7\i\1\j\u\3\d\u\e\c\t\1\v\d\b\h\1\h\h\b\0\4\y\d\t\c\9\h\a\i\2\1\p\r\d\e\w\p\b\f\3\2\6\n\s\6\9\h\c\0\v\r\n\0\u\f\l\c\4\t\c\7\a\4\w\j\q\v\t\y\e\v\4\1\8\m\8\7\y\l\o\d\8\z\c\o\l\r\l\8\b\q\n\4\0\9\h\r\w\y\p\3\6\k\e\k\a\t\g\2\e\k\k\0\y\4\i\d\p\z\7\t\z\0\j\q\1\c\1\w\c\m\x\7\n\3\1\1\u\l\d\j\q\2\0\2\s\5\5\x\k\2\r\3\b\y\w\n\p\3\o\i\y\8\h\a\f\x\1\b\7\o\4\j\5\c\v\p\7\i\m\m\1\2\f\z\h\1\r\8\q\3\a\z\c\a\y\t\y\q\1\r\x\b\9\6\p\n\i\l\s\j\f\f\m\6\0\q\5\n\k\d\q\l\w\i\5\9\g\w\o\f\v\5\n\m\g\v\g\8\1\1\0\j\b\m\2\0\l\7\v\o\i\5\8\d\x\z\s\2\s\v\w\x\u\7\x\k\3\l\8\m\9\m\w\w\y\w\i\z\x\o\6\r\d\7\a\9\a\k\9\z\z\m\k\d\a\0\2\0\h\2\7\0\s\o\r\p\1\l\u\b\7\f\4\c\9\8\z\8\5\l\o\6\k\1\0\r\z\9\o\z\p\e\4\9\b\t\j\8\e\0\w\l\h\b\c\s\3\s\p\v\0\5\a\n\p\1\k\n\f\1\h\2\3\j\4\5\o\w\b\o ]] 00:10:49.284 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:49.284 05:21:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:49.284 [2024-11-20 05:21:03.664410] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:49.284 [2024-11-20 05:21:03.664533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:10:49.543 [2024-11-20 05:21:03.817592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.543 [2024-11-20 05:21:03.858931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.543 [2024-11-20 05:21:03.893881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.543  [2024-11-20T05:21:04.056Z] Copying: 512/512 [B] (average 500 kBps) 00:10:49.543 00:10:49.802 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gba5wapbb45zx3y9my00ik67ajvsjxq3t2zbckoe7pk0wddsh8n1z624ae6mybwpkv4ib2d761ma4wge9ppi8ogls13ajzi4is7fd24xd2ugf4uf50fxco9qzqrprriflqu0i2pihots7i1ju3duect1vdbh1hhb04ydtc9hai21prdewpbf326ns69hc0vrn0uflc4tc7a4wjqvtyev418m87ylod8zcolrl8bqn409hrwyp36kekatg2ekk0y4idpz7tz0jq1c1wcmx7n311uldjq202s55xk2r3bywnp3oiy8hafx1b7o4j5cvp7imm12fzh1r8q3azcaytyq1rxb96pnilsjffm60q5nkdqlwi59gwofv5nmgvg8110jbm20l7voi58dxzs2svwxu7xk3l8m9mwwywizxo6rd7a9ak9zzmkda020h270sorp1lub7f4c98z85lo6k10rz9ozpe49btj8e0wlhbcs3spv05anp1knf1h23j45owbo == \g\b\a\5\w\a\p\b\b\4\5\z\x\3\y\9\m\y\0\0\i\k\6\7\a\j\v\s\j\x\q\3\t\2\z\b\c\k\o\e\7\p\k\0\w\d\d\s\h\8\n\1\z\6\2\4\a\e\6\m\y\b\w\p\k\v\4\i\b\2\d\7\6\1\m\a\4\w\g\e\9\p\p\i\8\o\g\l\s\1\3\a\j\z\i\4\i\s\7\f\d\2\4\x\d\2\u\g\f\4\u\f\5\0\f\x\c\o\9\q\z\q\r\p\r\r\i\f\l\q\u\0\i\2\p\i\h\o\t\s\7\i\1\j\u\3\d\u\e\c\t\1\v\d\b\h\1\h\h\b\0\4\y\d\t\c\9\h\a\i\2\1\p\r\d\e\w\p\b\f\3\2\6\n\s\6\9\h\c\0\v\r\n\0\u\f\l\c\4\t\c\7\a\4\w\j\q\v\t\y\e\v\4\1\8\m\8\7\y\l\o\d\8\z\c\o\l\r\l\8\b\q\n\4\0\9\h\r\w\y\p\3\6\k\e\k\a\t\g\2\e\k\k\0\y\4\i\d\p\z\7\t\z\0\j\q\1\c\1\w\c\m\x\7\n\3\1\1\u\l\d\j\q\2\0\2\s\5\5\x\k\2\r\3\b\y\w\n\p\3\o\i\y\8\h\a\f\x\1\b\7\o\4\j\5\c\v\p\7\i\m\m\1\2\f\z\h\1\r\8\q\3\a\z\c\a\y\t\y\q\1\r\x\b\9\6\p\n\i\l\s\j\f\f\m\6\0\q\5\n\k\d\q\l\w\i\5\9\g\w\o\f\v\5\n\m\g\v\g\8\1\1\0\j\b\m\2\0\l\7\v\o\i\5\8\d\x\z\s\2\s\v\w\x\u\7\x\k\3\l\8\m\9\m\w\w\y\w\i\z\x\o\6\r\d\7\a\9\a\k\9\z\z\m\k\d\a\0\2\0\h\2\7\0\s\o\r\p\1\l\u\b\7\f\4\c\9\8\z\8\5\l\o\6\k\1\0\r\z\9\o\z\p\e\4\9\b\t\j\8\e\0\w\l\h\b\c\s\3\s\p\v\0\5\a\n\p\1\k\n\f\1\h\2\3\j\4\5\o\w\b\o ]] 00:10:49.802 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:49.802 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:49.802 [2024-11-20 05:21:04.106257] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
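(Editorial aside) Each of these runs ends with the dd/posix.sh@93 assertion seen above: the generated payload is kept in a shell variable and string-compared against whatever landed in dd.dump1. A compact sketch of that check, assuming an alphanumeric payload like the ones in this log; variable names are placeholders:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
payload=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 512)   # stand-in for gen_bytes 512
printf '%s' "$payload" > dd.dump0
"$SPDK_DD" --if=dd.dump0 --iflag=nonblock --of=dd.dump1 --oflag=sync
[[ "$(<dd.dump1)" == "$payload" ]] && echo "dd.dump1 matches the generated payload"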
00:10:49.802 [2024-11-20 05:21:04.106579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 00:10:49.802 [2024-11-20 05:21:04.253684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.802 [2024-11-20 05:21:04.295499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.061 [2024-11-20 05:21:04.329481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.061  [2024-11-20T05:21:04.574Z] Copying: 512/512 [B] (average 166 kBps) 00:10:50.061 00:10:50.061 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gba5wapbb45zx3y9my00ik67ajvsjxq3t2zbckoe7pk0wddsh8n1z624ae6mybwpkv4ib2d761ma4wge9ppi8ogls13ajzi4is7fd24xd2ugf4uf50fxco9qzqrprriflqu0i2pihots7i1ju3duect1vdbh1hhb04ydtc9hai21prdewpbf326ns69hc0vrn0uflc4tc7a4wjqvtyev418m87ylod8zcolrl8bqn409hrwyp36kekatg2ekk0y4idpz7tz0jq1c1wcmx7n311uldjq202s55xk2r3bywnp3oiy8hafx1b7o4j5cvp7imm12fzh1r8q3azcaytyq1rxb96pnilsjffm60q5nkdqlwi59gwofv5nmgvg8110jbm20l7voi58dxzs2svwxu7xk3l8m9mwwywizxo6rd7a9ak9zzmkda020h270sorp1lub7f4c98z85lo6k10rz9ozpe49btj8e0wlhbcs3spv05anp1knf1h23j45owbo == \g\b\a\5\w\a\p\b\b\4\5\z\x\3\y\9\m\y\0\0\i\k\6\7\a\j\v\s\j\x\q\3\t\2\z\b\c\k\o\e\7\p\k\0\w\d\d\s\h\8\n\1\z\6\2\4\a\e\6\m\y\b\w\p\k\v\4\i\b\2\d\7\6\1\m\a\4\w\g\e\9\p\p\i\8\o\g\l\s\1\3\a\j\z\i\4\i\s\7\f\d\2\4\x\d\2\u\g\f\4\u\f\5\0\f\x\c\o\9\q\z\q\r\p\r\r\i\f\l\q\u\0\i\2\p\i\h\o\t\s\7\i\1\j\u\3\d\u\e\c\t\1\v\d\b\h\1\h\h\b\0\4\y\d\t\c\9\h\a\i\2\1\p\r\d\e\w\p\b\f\3\2\6\n\s\6\9\h\c\0\v\r\n\0\u\f\l\c\4\t\c\7\a\4\w\j\q\v\t\y\e\v\4\1\8\m\8\7\y\l\o\d\8\z\c\o\l\r\l\8\b\q\n\4\0\9\h\r\w\y\p\3\6\k\e\k\a\t\g\2\e\k\k\0\y\4\i\d\p\z\7\t\z\0\j\q\1\c\1\w\c\m\x\7\n\3\1\1\u\l\d\j\q\2\0\2\s\5\5\x\k\2\r\3\b\y\w\n\p\3\o\i\y\8\h\a\f\x\1\b\7\o\4\j\5\c\v\p\7\i\m\m\1\2\f\z\h\1\r\8\q\3\a\z\c\a\y\t\y\q\1\r\x\b\9\6\p\n\i\l\s\j\f\f\m\6\0\q\5\n\k\d\q\l\w\i\5\9\g\w\o\f\v\5\n\m\g\v\g\8\1\1\0\j\b\m\2\0\l\7\v\o\i\5\8\d\x\z\s\2\s\v\w\x\u\7\x\k\3\l\8\m\9\m\w\w\y\w\i\z\x\o\6\r\d\7\a\9\a\k\9\z\z\m\k\d\a\0\2\0\h\2\7\0\s\o\r\p\1\l\u\b\7\f\4\c\9\8\z\8\5\l\o\6\k\1\0\r\z\9\o\z\p\e\4\9\b\t\j\8\e\0\w\l\h\b\c\s\3\s\p\v\0\5\a\n\p\1\k\n\f\1\h\2\3\j\4\5\o\w\b\o ]] 00:10:50.061 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:50.061 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:50.061 [2024-11-20 05:21:04.540459] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:50.061 [2024-11-20 05:21:04.540714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:10:50.320 [2024-11-20 05:21:04.682369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.320 [2024-11-20 05:21:04.714369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.320 [2024-11-20 05:21:04.747141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.320  [2024-11-20T05:21:05.093Z] Copying: 512/512 [B] (average 250 kBps) 00:10:50.580 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gba5wapbb45zx3y9my00ik67ajvsjxq3t2zbckoe7pk0wddsh8n1z624ae6mybwpkv4ib2d761ma4wge9ppi8ogls13ajzi4is7fd24xd2ugf4uf50fxco9qzqrprriflqu0i2pihots7i1ju3duect1vdbh1hhb04ydtc9hai21prdewpbf326ns69hc0vrn0uflc4tc7a4wjqvtyev418m87ylod8zcolrl8bqn409hrwyp36kekatg2ekk0y4idpz7tz0jq1c1wcmx7n311uldjq202s55xk2r3bywnp3oiy8hafx1b7o4j5cvp7imm12fzh1r8q3azcaytyq1rxb96pnilsjffm60q5nkdqlwi59gwofv5nmgvg8110jbm20l7voi58dxzs2svwxu7xk3l8m9mwwywizxo6rd7a9ak9zzmkda020h270sorp1lub7f4c98z85lo6k10rz9ozpe49btj8e0wlhbcs3spv05anp1knf1h23j45owbo == \g\b\a\5\w\a\p\b\b\4\5\z\x\3\y\9\m\y\0\0\i\k\6\7\a\j\v\s\j\x\q\3\t\2\z\b\c\k\o\e\7\p\k\0\w\d\d\s\h\8\n\1\z\6\2\4\a\e\6\m\y\b\w\p\k\v\4\i\b\2\d\7\6\1\m\a\4\w\g\e\9\p\p\i\8\o\g\l\s\1\3\a\j\z\i\4\i\s\7\f\d\2\4\x\d\2\u\g\f\4\u\f\5\0\f\x\c\o\9\q\z\q\r\p\r\r\i\f\l\q\u\0\i\2\p\i\h\o\t\s\7\i\1\j\u\3\d\u\e\c\t\1\v\d\b\h\1\h\h\b\0\4\y\d\t\c\9\h\a\i\2\1\p\r\d\e\w\p\b\f\3\2\6\n\s\6\9\h\c\0\v\r\n\0\u\f\l\c\4\t\c\7\a\4\w\j\q\v\t\y\e\v\4\1\8\m\8\7\y\l\o\d\8\z\c\o\l\r\l\8\b\q\n\4\0\9\h\r\w\y\p\3\6\k\e\k\a\t\g\2\e\k\k\0\y\4\i\d\p\z\7\t\z\0\j\q\1\c\1\w\c\m\x\7\n\3\1\1\u\l\d\j\q\2\0\2\s\5\5\x\k\2\r\3\b\y\w\n\p\3\o\i\y\8\h\a\f\x\1\b\7\o\4\j\5\c\v\p\7\i\m\m\1\2\f\z\h\1\r\8\q\3\a\z\c\a\y\t\y\q\1\r\x\b\9\6\p\n\i\l\s\j\f\f\m\6\0\q\5\n\k\d\q\l\w\i\5\9\g\w\o\f\v\5\n\m\g\v\g\8\1\1\0\j\b\m\2\0\l\7\v\o\i\5\8\d\x\z\s\2\s\v\w\x\u\7\x\k\3\l\8\m\9\m\w\w\y\w\i\z\x\o\6\r\d\7\a\9\a\k\9\z\z\m\k\d\a\0\2\0\h\2\7\0\s\o\r\p\1\l\u\b\7\f\4\c\9\8\z\8\5\l\o\6\k\1\0\r\z\9\o\z\p\e\4\9\b\t\j\8\e\0\w\l\h\b\c\s\3\s\p\v\0\5\a\n\p\1\k\n\f\1\h\2\3\j\4\5\o\w\b\o ]] 00:10:50.580 00:10:50.580 real 0m3.481s 00:10:50.580 user 0m1.852s 00:10:50.580 sys 0m1.480s 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.580 ************************************ 00:10:50.580 END TEST dd_flags_misc 00:10:50.580 ************************************ 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:50.580 * Second test run, disabling liburing, forcing AIO 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.580 ************************************ 00:10:50.580 START TEST dd_flag_append_forced_aio 00:10:50.580 ************************************ 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=33z8skz3n61fpxxv960s5bbn3e81kfcz 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=e7222t6djyzwiew15eitr6ndol4ne9el 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 33z8skz3n61fpxxv960s5bbn3e81kfcz 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s e7222t6djyzwiew15eitr6ndol4ne9el 00:10:50.580 05:21:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:50.580 [2024-11-20 05:21:05.015238] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
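(Editorial aside) The dd_flag_append_forced_aio case above seeds dd.dump0 and dd.dump1 with two independent 32-character strings, copies dump0 into dump1 with --oflag=append, and then requires dump1 to be the original dump1 string followed by the dump0 string. A hedged sketch of that flow; the random-string helper below is a stand-in for gen_bytes 32:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf '%s' "$dump0" > dd.dump0
printf '%s' "$dump1" > dd.dump1
"$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
# append must leave the existing contents in place and add dump0 after them
[[ "$(<dd.dump1)" == "${dump1}${dump0}" ]] && echo "append preserved existing data"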
00:10:50.580 [2024-11-20 05:21:05.015330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60732 ] 00:10:50.839 [2024-11-20 05:21:05.163680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.839 [2024-11-20 05:21:05.198863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.839 [2024-11-20 05:21:05.230635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.839  [2024-11-20T05:21:05.610Z] Copying: 32/32 [B] (average 31 kBps) 00:10:51.097 00:10:51.097 ************************************ 00:10:51.097 END TEST dd_flag_append_forced_aio 00:10:51.097 ************************************ 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ e7222t6djyzwiew15eitr6ndol4ne9el33z8skz3n61fpxxv960s5bbn3e81kfcz == \e\7\2\2\2\t\6\d\j\y\z\w\i\e\w\1\5\e\i\t\r\6\n\d\o\l\4\n\e\9\e\l\3\3\z\8\s\k\z\3\n\6\1\f\p\x\x\v\9\6\0\s\5\b\b\n\3\e\8\1\k\f\c\z ]] 00:10:51.097 00:10:51.097 real 0m0.446s 00:10:51.097 user 0m0.240s 00:10:51.097 sys 0m0.086s 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:51.097 ************************************ 00:10:51.097 START TEST dd_flag_directory_forced_aio 00:10:51.097 ************************************ 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.097 05:21:05 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.097 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:51.097 [2024-11-20 05:21:05.510645] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:51.097 [2024-11-20 05:21:05.510746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:10:51.363 [2024-11-20 05:21:05.661202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.363 [2024-11-20 05:21:05.697784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.363 [2024-11-20 05:21:05.731129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.363 [2024-11-20 05:21:05.752576] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.363 [2024-11-20 05:21:05.752865] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.363 [2024-11-20 05:21:05.752908] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.363 [2024-11-20 05:21:05.824882] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.634 05:21:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:51.634 [2024-11-20 05:21:05.953926] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:51.634 [2024-11-20 05:21:05.954078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60768 ] 00:10:51.634 [2024-11-20 05:21:06.104379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.634 [2024-11-20 05:21:06.139948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.893 [2024-11-20 05:21:06.171491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.893 [2024-11-20 05:21:06.192781] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.893 [2024-11-20 05:21:06.193086] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.893 [2024-11-20 05:21:06.193114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.893 [2024-11-20 05:21:06.262909] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:51.893 05:21:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:51.893 00:10:51.893 real 0m0.871s 00:10:51.893 user 0m0.460s 00:10:51.893 sys 0m0.200s 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:51.893 ************************************ 00:10:51.893 END TEST dd_flag_directory_forced_aio 00:10:51.893 ************************************ 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:51.893 ************************************ 00:10:51.893 START TEST dd_flag_nofollow_forced_aio 00:10:51.893 ************************************ 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.893 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.152 [2024-11-20 05:21:06.433474] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:52.152 [2024-11-20 05:21:06.433586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60796 ] 00:10:52.152 [2024-11-20 05:21:06.583616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.152 [2024-11-20 05:21:06.624197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.152 [2024-11-20 05:21:06.657506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.411 [2024-11-20 05:21:06.680537] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:52.411 [2024-11-20 05:21:06.680602] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:52.411 [2024-11-20 05:21:06.680626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.411 [2024-11-20 05:21:06.752375] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.411 05:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:52.411 [2024-11-20 05:21:06.878808] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:52.411 [2024-11-20 05:21:06.878934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 00:10:52.670 [2024-11-20 05:21:07.028511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.670 [2024-11-20 05:21:07.068836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.670 [2024-11-20 05:21:07.102161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.670 [2024-11-20 05:21:07.124176] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:52.670 [2024-11-20 05:21:07.124238] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:52.670 [2024-11-20 05:21:07.124264] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.928 [2024-11-20 05:21:07.193120] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:52.928 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.928 [2024-11-20 05:21:07.316614] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:52.928 [2024-11-20 05:21:07.316916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:10:53.187 [2024-11-20 05:21:07.465528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.187 [2024-11-20 05:21:07.500237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.187 [2024-11-20 05:21:07.531062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.187  [2024-11-20T05:21:07.700Z] Copying: 512/512 [B] (average 500 kBps) 00:10:53.187 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ yqdxd16c9jb7ap36qy6revfug3oa8h5kndpfr7vm8q8z4xxg601dk9zplpwl66q6np5iczd634ua2nz4nalz7qnund9a32m3fm71a676egfjgkqrcsvt587zy9r66h5k4dxwshpful7c3a7aef3iq93qq0y0sesn4g7f8516r0v7tk2r833oz3gftn31wl0bfguzr55i3h0g8k1zwosibiwfgh4sduqgy52giibxk2ciqd9m22v0j3ofc8dmf3yytbqxos8xvmc6lfsj5qawvvry1ls0db8dts6katoyxv7ve8akyztdpll7uz85rrv9xi4se11bvdw372guc1crsxww7d0swbr5vwfidrqzesdx3lmn3b2hj8mq78eawbdfrpzaabzpk1d0r32u7ilamgobeldmonl0dl7l9avkcgu7qg97bnfb29kcdlb6ixncpv85z5qp7jnpqpj17sl56jrzrht3r7hquuxvopseoy33kdwh0kva45jw13cve00o == \y\q\d\x\d\1\6\c\9\j\b\7\a\p\3\6\q\y\6\r\e\v\f\u\g\3\o\a\8\h\5\k\n\d\p\f\r\7\v\m\8\q\8\z\4\x\x\g\6\0\1\d\k\9\z\p\l\p\w\l\6\6\q\6\n\p\5\i\c\z\d\6\3\4\u\a\2\n\z\4\n\a\l\z\7\q\n\u\n\d\9\a\3\2\m\3\f\m\7\1\a\6\7\6\e\g\f\j\g\k\q\r\c\s\v\t\5\8\7\z\y\9\r\6\6\h\5\k\4\d\x\w\s\h\p\f\u\l\7\c\3\a\7\a\e\f\3\i\q\9\3\q\q\0\y\0\s\e\s\n\4\g\7\f\8\5\1\6\r\0\v\7\t\k\2\r\8\3\3\o\z\3\g\f\t\n\3\1\w\l\0\b\f\g\u\z\r\5\5\i\3\h\0\g\8\k\1\z\w\o\s\i\b\i\w\f\g\h\4\s\d\u\q\g\y\5\2\g\i\i\b\x\k\2\c\i\q\d\9\m\2\2\v\0\j\3\o\f\c\8\d\m\f\3\y\y\t\b\q\x\o\s\8\x\v\m\c\6\l\f\s\j\5\q\a\w\v\v\r\y\1\l\s\0\d\b\8\d\t\s\6\k\a\t\o\y\x\v\7\v\e\8\a\k\y\z\t\d\p\l\l\7\u\z\8\5\r\r\v\9\x\i\4\s\e\1\1\b\v\d\w\3\7\2\g\u\c\1\c\r\s\x\w\w\7\d\0\s\w\b\r\5\v\w\f\i\d\r\q\z\e\s\d\x\3\l\m\n\3\b\2\h\j\8\m\q\7\8\e\a\w\b\d\f\r\p\z\a\a\b\z\p\k\1\d\0\r\3\2\u\7\i\l\a\m\g\o\b\e\l\d\m\o\n\l\0\d\l\7\l\9\a\v\k\c\g\u\7\q\g\9\7\b\n\f\b\2\9\k\c\d\l\b\6\i\x\n\c\p\v\8\5\z\5\q\p\7\j\n\p\q\p\j\1\7\s\l\5\6\j\r\z\r\h\t\3\r\7\h\q\u\u\x\v\o\p\s\e\o\y\3\3\k\d\w\h\0\k\v\a\4\5\j\w\1\3\c\v\e\0\0\o ]] 00:10:53.446 00:10:53.446 real 0m1.335s 00:10:53.446 user 0m0.707s 00:10:53.446 sys 0m0.296s 00:10:53.446 ************************************ 00:10:53.446 END TEST dd_flag_nofollow_forced_aio 00:10:53.446 ************************************ 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 ************************************ 00:10:53.446 START TEST dd_flag_noatime_forced_aio 00:10:53.446 ************************************ 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732080067 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732080067 00:10:53.446 05:21:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:54.383 05:21:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.383 [2024-11-20 05:21:08.844513] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
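(Editorial aside) The noatime check driving this run is visible in the stat and (( )) lines above: it records dd.dump0's access time, sleeps so a later read would bump it, reads the file with --iflag=noatime (atime must stay unchanged), then reads it without the flag (atime must advance). A hedged sketch of that sequence; on Linux, O_NOATIME is honoured only for the file owner or a CAP_FOWNER holder, which holds for the vagrant-owned dump files here:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'payload' > dd.dump0                      # any pre-existing input file works
atime_before=$(stat --printf=%X dd.dump0)
sleep 1
"$SPDK_DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( atime_before == $(stat --printf=%X dd.dump0) )) || echo "noatime read still bumped atime"
"$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1
(( atime_before < $(stat --printf=%X dd.dump0) )) || echo "plain read did not bump atime"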
00:10:54.383 [2024-11-20 05:21:08.844843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:10:54.641 [2024-11-20 05:21:08.999623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.641 [2024-11-20 05:21:09.062325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.641 [2024-11-20 05:21:09.094198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.641  [2024-11-20T05:21:09.413Z] Copying: 512/512 [B] (average 500 kBps) 00:10:54.900 00:10:54.900 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:54.900 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732080067 )) 00:10:54.900 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.900 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732080067 )) 00:10:54.900 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.900 [2024-11-20 05:21:09.351373] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:54.900 [2024-11-20 05:21:09.351530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:10:55.159 [2024-11-20 05:21:09.505193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.159 [2024-11-20 05:21:09.546186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.159 [2024-11-20 05:21:09.579210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.159  [2024-11-20T05:21:09.931Z] Copying: 512/512 [B] (average 500 kBps) 00:10:55.418 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:55.418 ************************************ 00:10:55.418 END TEST dd_flag_noatime_forced_aio 00:10:55.418 ************************************ 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732080069 )) 00:10:55.418 00:10:55.418 real 0m2.011s 00:10:55.418 user 0m0.524s 00:10:55.418 sys 0m0.238s 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.418 ************************************ 00:10:55.418 START TEST dd_flags_misc_forced_aio 00:10:55.418 ************************************ 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:55.418 05:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:55.418 [2024-11-20 05:21:09.885357] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:55.418 [2024-11-20 05:21:09.885471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60892 ] 00:10:55.676 [2024-11-20 05:21:10.037371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.676 [2024-11-20 05:21:10.079228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.676 [2024-11-20 05:21:10.114394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.676  [2024-11-20T05:21:10.447Z] Copying: 512/512 [B] (average 500 kBps) 00:10:55.934 00:10:55.934 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ laxj2cz3ncfeq3p48pnggl2hwztt5o7pzeavf6x6c6l81dl146uywgmrlc9khy432x62l0dn6kn3xyw0bbev1ip35bilgsjvjxrnpa20ocghveudzf5wu2snlv9dd5azkx1euj68pd69jvl01qyp8tzt6eb3onwz6l1cv0ew6gip4knrwqwd68i2gjx72f38onxov6w29d75as75ma4w0jv935krp4fm55bbiq9kuvxn56lr33r3cbwceweqk7uz86z2zf4enfdwanzqzonkg9xd1qfo6f1d7gy1reft2et63uhlaqeyh8jld70nbm0snan9ifa2knt5fhecgc3i0dc7hn65u5uwoebcg4rmkmgfsf0avqf78bnth9f2ietu54yi9uobn9cjtexi0y3glei95lqo9dd8p9z36d3fheldg878hcev6wrzshgrhe4vy5fgvcr1w8aqkaaqr2vi2hth05fp782rkq7pmintd8qfeunvrjd3rkoublo27i30 == 
\l\a\x\j\2\c\z\3\n\c\f\e\q\3\p\4\8\p\n\g\g\l\2\h\w\z\t\t\5\o\7\p\z\e\a\v\f\6\x\6\c\6\l\8\1\d\l\1\4\6\u\y\w\g\m\r\l\c\9\k\h\y\4\3\2\x\6\2\l\0\d\n\6\k\n\3\x\y\w\0\b\b\e\v\1\i\p\3\5\b\i\l\g\s\j\v\j\x\r\n\p\a\2\0\o\c\g\h\v\e\u\d\z\f\5\w\u\2\s\n\l\v\9\d\d\5\a\z\k\x\1\e\u\j\6\8\p\d\6\9\j\v\l\0\1\q\y\p\8\t\z\t\6\e\b\3\o\n\w\z\6\l\1\c\v\0\e\w\6\g\i\p\4\k\n\r\w\q\w\d\6\8\i\2\g\j\x\7\2\f\3\8\o\n\x\o\v\6\w\2\9\d\7\5\a\s\7\5\m\a\4\w\0\j\v\9\3\5\k\r\p\4\f\m\5\5\b\b\i\q\9\k\u\v\x\n\5\6\l\r\3\3\r\3\c\b\w\c\e\w\e\q\k\7\u\z\8\6\z\2\z\f\4\e\n\f\d\w\a\n\z\q\z\o\n\k\g\9\x\d\1\q\f\o\6\f\1\d\7\g\y\1\r\e\f\t\2\e\t\6\3\u\h\l\a\q\e\y\h\8\j\l\d\7\0\n\b\m\0\s\n\a\n\9\i\f\a\2\k\n\t\5\f\h\e\c\g\c\3\i\0\d\c\7\h\n\6\5\u\5\u\w\o\e\b\c\g\4\r\m\k\m\g\f\s\f\0\a\v\q\f\7\8\b\n\t\h\9\f\2\i\e\t\u\5\4\y\i\9\u\o\b\n\9\c\j\t\e\x\i\0\y\3\g\l\e\i\9\5\l\q\o\9\d\d\8\p\9\z\3\6\d\3\f\h\e\l\d\g\8\7\8\h\c\e\v\6\w\r\z\s\h\g\r\h\e\4\v\y\5\f\g\v\c\r\1\w\8\a\q\k\a\a\q\r\2\v\i\2\h\t\h\0\5\f\p\7\8\2\r\k\q\7\p\m\i\n\t\d\8\q\f\e\u\n\v\r\j\d\3\r\k\o\u\b\l\o\2\7\i\3\0 ]] 00:10:55.934 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:55.934 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:55.934 [2024-11-20 05:21:10.373441] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:55.935 [2024-11-20 05:21:10.373593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60894 ] 00:10:56.193 [2024-11-20 05:21:10.531922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.193 [2024-11-20 05:21:10.573269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.193 [2024-11-20 05:21:10.606965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.193  [2024-11-20T05:21:10.965Z] Copying: 512/512 [B] (average 500 kBps) 00:10:56.452 00:10:56.453 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ laxj2cz3ncfeq3p48pnggl2hwztt5o7pzeavf6x6c6l81dl146uywgmrlc9khy432x62l0dn6kn3xyw0bbev1ip35bilgsjvjxrnpa20ocghveudzf5wu2snlv9dd5azkx1euj68pd69jvl01qyp8tzt6eb3onwz6l1cv0ew6gip4knrwqwd68i2gjx72f38onxov6w29d75as75ma4w0jv935krp4fm55bbiq9kuvxn56lr33r3cbwceweqk7uz86z2zf4enfdwanzqzonkg9xd1qfo6f1d7gy1reft2et63uhlaqeyh8jld70nbm0snan9ifa2knt5fhecgc3i0dc7hn65u5uwoebcg4rmkmgfsf0avqf78bnth9f2ietu54yi9uobn9cjtexi0y3glei95lqo9dd8p9z36d3fheldg878hcev6wrzshgrhe4vy5fgvcr1w8aqkaaqr2vi2hth05fp782rkq7pmintd8qfeunvrjd3rkoublo27i30 == 
\l\a\x\j\2\c\z\3\n\c\f\e\q\3\p\4\8\p\n\g\g\l\2\h\w\z\t\t\5\o\7\p\z\e\a\v\f\6\x\6\c\6\l\8\1\d\l\1\4\6\u\y\w\g\m\r\l\c\9\k\h\y\4\3\2\x\6\2\l\0\d\n\6\k\n\3\x\y\w\0\b\b\e\v\1\i\p\3\5\b\i\l\g\s\j\v\j\x\r\n\p\a\2\0\o\c\g\h\v\e\u\d\z\f\5\w\u\2\s\n\l\v\9\d\d\5\a\z\k\x\1\e\u\j\6\8\p\d\6\9\j\v\l\0\1\q\y\p\8\t\z\t\6\e\b\3\o\n\w\z\6\l\1\c\v\0\e\w\6\g\i\p\4\k\n\r\w\q\w\d\6\8\i\2\g\j\x\7\2\f\3\8\o\n\x\o\v\6\w\2\9\d\7\5\a\s\7\5\m\a\4\w\0\j\v\9\3\5\k\r\p\4\f\m\5\5\b\b\i\q\9\k\u\v\x\n\5\6\l\r\3\3\r\3\c\b\w\c\e\w\e\q\k\7\u\z\8\6\z\2\z\f\4\e\n\f\d\w\a\n\z\q\z\o\n\k\g\9\x\d\1\q\f\o\6\f\1\d\7\g\y\1\r\e\f\t\2\e\t\6\3\u\h\l\a\q\e\y\h\8\j\l\d\7\0\n\b\m\0\s\n\a\n\9\i\f\a\2\k\n\t\5\f\h\e\c\g\c\3\i\0\d\c\7\h\n\6\5\u\5\u\w\o\e\b\c\g\4\r\m\k\m\g\f\s\f\0\a\v\q\f\7\8\b\n\t\h\9\f\2\i\e\t\u\5\4\y\i\9\u\o\b\n\9\c\j\t\e\x\i\0\y\3\g\l\e\i\9\5\l\q\o\9\d\d\8\p\9\z\3\6\d\3\f\h\e\l\d\g\8\7\8\h\c\e\v\6\w\r\z\s\h\g\r\h\e\4\v\y\5\f\g\v\c\r\1\w\8\a\q\k\a\a\q\r\2\v\i\2\h\t\h\0\5\f\p\7\8\2\r\k\q\7\p\m\i\n\t\d\8\q\f\e\u\n\v\r\j\d\3\r\k\o\u\b\l\o\2\7\i\3\0 ]] 00:10:56.453 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:56.453 05:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:56.453 [2024-11-20 05:21:10.828240] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:56.453 [2024-11-20 05:21:10.828333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:10:56.711 [2024-11-20 05:21:10.975997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.711 [2024-11-20 05:21:11.009709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.711 [2024-11-20 05:21:11.040811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.711  [2024-11-20T05:21:11.224Z] Copying: 512/512 [B] (average 83 kBps) 00:10:56.711 00:10:56.711 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ laxj2cz3ncfeq3p48pnggl2hwztt5o7pzeavf6x6c6l81dl146uywgmrlc9khy432x62l0dn6kn3xyw0bbev1ip35bilgsjvjxrnpa20ocghveudzf5wu2snlv9dd5azkx1euj68pd69jvl01qyp8tzt6eb3onwz6l1cv0ew6gip4knrwqwd68i2gjx72f38onxov6w29d75as75ma4w0jv935krp4fm55bbiq9kuvxn56lr33r3cbwceweqk7uz86z2zf4enfdwanzqzonkg9xd1qfo6f1d7gy1reft2et63uhlaqeyh8jld70nbm0snan9ifa2knt5fhecgc3i0dc7hn65u5uwoebcg4rmkmgfsf0avqf78bnth9f2ietu54yi9uobn9cjtexi0y3glei95lqo9dd8p9z36d3fheldg878hcev6wrzshgrhe4vy5fgvcr1w8aqkaaqr2vi2hth05fp782rkq7pmintd8qfeunvrjd3rkoublo27i30 == 
\l\a\x\j\2\c\z\3\n\c\f\e\q\3\p\4\8\p\n\g\g\l\2\h\w\z\t\t\5\o\7\p\z\e\a\v\f\6\x\6\c\6\l\8\1\d\l\1\4\6\u\y\w\g\m\r\l\c\9\k\h\y\4\3\2\x\6\2\l\0\d\n\6\k\n\3\x\y\w\0\b\b\e\v\1\i\p\3\5\b\i\l\g\s\j\v\j\x\r\n\p\a\2\0\o\c\g\h\v\e\u\d\z\f\5\w\u\2\s\n\l\v\9\d\d\5\a\z\k\x\1\e\u\j\6\8\p\d\6\9\j\v\l\0\1\q\y\p\8\t\z\t\6\e\b\3\o\n\w\z\6\l\1\c\v\0\e\w\6\g\i\p\4\k\n\r\w\q\w\d\6\8\i\2\g\j\x\7\2\f\3\8\o\n\x\o\v\6\w\2\9\d\7\5\a\s\7\5\m\a\4\w\0\j\v\9\3\5\k\r\p\4\f\m\5\5\b\b\i\q\9\k\u\v\x\n\5\6\l\r\3\3\r\3\c\b\w\c\e\w\e\q\k\7\u\z\8\6\z\2\z\f\4\e\n\f\d\w\a\n\z\q\z\o\n\k\g\9\x\d\1\q\f\o\6\f\1\d\7\g\y\1\r\e\f\t\2\e\t\6\3\u\h\l\a\q\e\y\h\8\j\l\d\7\0\n\b\m\0\s\n\a\n\9\i\f\a\2\k\n\t\5\f\h\e\c\g\c\3\i\0\d\c\7\h\n\6\5\u\5\u\w\o\e\b\c\g\4\r\m\k\m\g\f\s\f\0\a\v\q\f\7\8\b\n\t\h\9\f\2\i\e\t\u\5\4\y\i\9\u\o\b\n\9\c\j\t\e\x\i\0\y\3\g\l\e\i\9\5\l\q\o\9\d\d\8\p\9\z\3\6\d\3\f\h\e\l\d\g\8\7\8\h\c\e\v\6\w\r\z\s\h\g\r\h\e\4\v\y\5\f\g\v\c\r\1\w\8\a\q\k\a\a\q\r\2\v\i\2\h\t\h\0\5\f\p\7\8\2\r\k\q\7\p\m\i\n\t\d\8\q\f\e\u\n\v\r\j\d\3\r\k\o\u\b\l\o\2\7\i\3\0 ]] 00:10:56.711 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:56.711 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:56.971 [2024-11-20 05:21:11.271387] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:56.971 [2024-11-20 05:21:11.271488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:10:56.971 [2024-11-20 05:21:11.416373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.971 [2024-11-20 05:21:11.458435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.230 [2024-11-20 05:21:11.489673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.230  [2024-11-20T05:21:11.743Z] Copying: 512/512 [B] (average 500 kBps) 00:10:57.230 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ laxj2cz3ncfeq3p48pnggl2hwztt5o7pzeavf6x6c6l81dl146uywgmrlc9khy432x62l0dn6kn3xyw0bbev1ip35bilgsjvjxrnpa20ocghveudzf5wu2snlv9dd5azkx1euj68pd69jvl01qyp8tzt6eb3onwz6l1cv0ew6gip4knrwqwd68i2gjx72f38onxov6w29d75as75ma4w0jv935krp4fm55bbiq9kuvxn56lr33r3cbwceweqk7uz86z2zf4enfdwanzqzonkg9xd1qfo6f1d7gy1reft2et63uhlaqeyh8jld70nbm0snan9ifa2knt5fhecgc3i0dc7hn65u5uwoebcg4rmkmgfsf0avqf78bnth9f2ietu54yi9uobn9cjtexi0y3glei95lqo9dd8p9z36d3fheldg878hcev6wrzshgrhe4vy5fgvcr1w8aqkaaqr2vi2hth05fp782rkq7pmintd8qfeunvrjd3rkoublo27i30 == 
\l\a\x\j\2\c\z\3\n\c\f\e\q\3\p\4\8\p\n\g\g\l\2\h\w\z\t\t\5\o\7\p\z\e\a\v\f\6\x\6\c\6\l\8\1\d\l\1\4\6\u\y\w\g\m\r\l\c\9\k\h\y\4\3\2\x\6\2\l\0\d\n\6\k\n\3\x\y\w\0\b\b\e\v\1\i\p\3\5\b\i\l\g\s\j\v\j\x\r\n\p\a\2\0\o\c\g\h\v\e\u\d\z\f\5\w\u\2\s\n\l\v\9\d\d\5\a\z\k\x\1\e\u\j\6\8\p\d\6\9\j\v\l\0\1\q\y\p\8\t\z\t\6\e\b\3\o\n\w\z\6\l\1\c\v\0\e\w\6\g\i\p\4\k\n\r\w\q\w\d\6\8\i\2\g\j\x\7\2\f\3\8\o\n\x\o\v\6\w\2\9\d\7\5\a\s\7\5\m\a\4\w\0\j\v\9\3\5\k\r\p\4\f\m\5\5\b\b\i\q\9\k\u\v\x\n\5\6\l\r\3\3\r\3\c\b\w\c\e\w\e\q\k\7\u\z\8\6\z\2\z\f\4\e\n\f\d\w\a\n\z\q\z\o\n\k\g\9\x\d\1\q\f\o\6\f\1\d\7\g\y\1\r\e\f\t\2\e\t\6\3\u\h\l\a\q\e\y\h\8\j\l\d\7\0\n\b\m\0\s\n\a\n\9\i\f\a\2\k\n\t\5\f\h\e\c\g\c\3\i\0\d\c\7\h\n\6\5\u\5\u\w\o\e\b\c\g\4\r\m\k\m\g\f\s\f\0\a\v\q\f\7\8\b\n\t\h\9\f\2\i\e\t\u\5\4\y\i\9\u\o\b\n\9\c\j\t\e\x\i\0\y\3\g\l\e\i\9\5\l\q\o\9\d\d\8\p\9\z\3\6\d\3\f\h\e\l\d\g\8\7\8\h\c\e\v\6\w\r\z\s\h\g\r\h\e\4\v\y\5\f\g\v\c\r\1\w\8\a\q\k\a\a\q\r\2\v\i\2\h\t\h\0\5\f\p\7\8\2\r\k\q\7\p\m\i\n\t\d\8\q\f\e\u\n\v\r\j\d\3\r\k\o\u\b\l\o\2\7\i\3\0 ]] 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.230 05:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:57.230 [2024-11-20 05:21:11.729732] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:57.230 [2024-11-20 05:21:11.730016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:10:57.488 [2024-11-20 05:21:11.882090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.488 [2024-11-20 05:21:11.922488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.488 [2024-11-20 05:21:11.956387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.488  [2024-11-20T05:21:12.260Z] Copying: 512/512 [B] (average 500 kBps) 00:10:57.747 00:10:57.747 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4yscy5urs69nnh7lnljkcwfxvw8zxawt4fginojegzv137lsk7rte39e6z2elojfr9tm9i7fy2fb8yv8b3y0asworu4rzrnyxapveu71iwtrex7xkzh1f0ehmdk0pqwmt0sw99x8qygdp2lvu5c70ayg6budg6rzs3364cze2m6kz663mxplubguiiokpmqjs96ht99v4o8j5pclo1d4aj61ncwvpoqy7e1jlltd6kwigaal9jzgdz7jlwdcy0ucx11u9zb5yemdqcadwkjcaws669jpek2gl5b0nxg1xuxcthtlcfa49zrwa1su1fne5nkmp3jq8up2yb8mz05b2uxi73mknpuypl45niy94ij6hrm1g1aby8kd964n4mot0hctlynsthcv0qyllo8bsoco70ynxfqcvoer7qkhoabg5dwvnrpc0c4gvja3641bvna2wjb5aovk90g1s0idkvu9g0v8ez3l3ovzdfn1vw4wum2pgvyprody1x4k7eyg == \4\y\s\c\y\5\u\r\s\6\9\n\n\h\7\l\n\l\j\k\c\w\f\x\v\w\8\z\x\a\w\t\4\f\g\i\n\o\j\e\g\z\v\1\3\7\l\s\k\7\r\t\e\3\9\e\6\z\2\e\l\o\j\f\r\9\t\m\9\i\7\f\y\2\f\b\8\y\v\8\b\3\y\0\a\s\w\o\r\u\4\r\z\r\n\y\x\a\p\v\e\u\7\1\i\w\t\r\e\x\7\x\k\z\h\1\f\0\e\h\m\d\k\0\p\q\w\m\t\0\s\w\9\9\x\8\q\y\g\d\p\2\l\v\u\5\c\7\0\a\y\g\6\b\u\d\g\6\r\z\s\3\3\6\4\c\z\e\2\m\6\k\z\6\6\3\m\x\p\l\u\b\g\u\i\i\o\k\p\m\q\j\s\9\6\h\t\9\9\v\4\o\8\j\5\p\c\l\o\1\d\4\a\j\6\1\n\c\w\v\p\o\q\y\7\e\1\j\l\l\t\d\6\k\w\i\g\a\a\l\9\j\z\g\d\z\7\j\l\w\d\c\y\0\u\c\x\1\1\u\9\z\b\5\y\e\m\d\q\c\a\d\w\k\j\c\a\w\s\6\6\9\j\p\e\k\2\g\l\5\b\0\n\x\g\1\x\u\x\c\t\h\t\l\c\f\a\4\9\z\r\w\a\1\s\u\1\f\n\e\5\n\k\m\p\3\j\q\8\u\p\2\y\b\8\m\z\0\5\b\2\u\x\i\7\3\m\k\n\p\u\y\p\l\4\5\n\i\y\9\4\i\j\6\h\r\m\1\g\1\a\b\y\8\k\d\9\6\4\n\4\m\o\t\0\h\c\t\l\y\n\s\t\h\c\v\0\q\y\l\l\o\8\b\s\o\c\o\7\0\y\n\x\f\q\c\v\o\e\r\7\q\k\h\o\a\b\g\5\d\w\v\n\r\p\c\0\c\4\g\v\j\a\3\6\4\1\b\v\n\a\2\w\j\b\5\a\o\v\k\9\0\g\1\s\0\i\d\k\v\u\9\g\0\v\8\e\z\3\l\3\o\v\z\d\f\n\1\v\w\4\w\u\m\2\p\g\v\y\p\r\o\d\y\1\x\4\k\7\e\y\g ]] 00:10:57.747 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.747 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:57.747 [2024-11-20 05:21:12.187679] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:57.747 [2024-11-20 05:21:12.187950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:10:58.006 [2024-11-20 05:21:12.338238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.006 [2024-11-20 05:21:12.379431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.006 [2024-11-20 05:21:12.412929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.006  [2024-11-20T05:21:12.778Z] Copying: 512/512 [B] (average 500 kBps) 00:10:58.265 00:10:58.265 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4yscy5urs69nnh7lnljkcwfxvw8zxawt4fginojegzv137lsk7rte39e6z2elojfr9tm9i7fy2fb8yv8b3y0asworu4rzrnyxapveu71iwtrex7xkzh1f0ehmdk0pqwmt0sw99x8qygdp2lvu5c70ayg6budg6rzs3364cze2m6kz663mxplubguiiokpmqjs96ht99v4o8j5pclo1d4aj61ncwvpoqy7e1jlltd6kwigaal9jzgdz7jlwdcy0ucx11u9zb5yemdqcadwkjcaws669jpek2gl5b0nxg1xuxcthtlcfa49zrwa1su1fne5nkmp3jq8up2yb8mz05b2uxi73mknpuypl45niy94ij6hrm1g1aby8kd964n4mot0hctlynsthcv0qyllo8bsoco70ynxfqcvoer7qkhoabg5dwvnrpc0c4gvja3641bvna2wjb5aovk90g1s0idkvu9g0v8ez3l3ovzdfn1vw4wum2pgvyprody1x4k7eyg == \4\y\s\c\y\5\u\r\s\6\9\n\n\h\7\l\n\l\j\k\c\w\f\x\v\w\8\z\x\a\w\t\4\f\g\i\n\o\j\e\g\z\v\1\3\7\l\s\k\7\r\t\e\3\9\e\6\z\2\e\l\o\j\f\r\9\t\m\9\i\7\f\y\2\f\b\8\y\v\8\b\3\y\0\a\s\w\o\r\u\4\r\z\r\n\y\x\a\p\v\e\u\7\1\i\w\t\r\e\x\7\x\k\z\h\1\f\0\e\h\m\d\k\0\p\q\w\m\t\0\s\w\9\9\x\8\q\y\g\d\p\2\l\v\u\5\c\7\0\a\y\g\6\b\u\d\g\6\r\z\s\3\3\6\4\c\z\e\2\m\6\k\z\6\6\3\m\x\p\l\u\b\g\u\i\i\o\k\p\m\q\j\s\9\6\h\t\9\9\v\4\o\8\j\5\p\c\l\o\1\d\4\a\j\6\1\n\c\w\v\p\o\q\y\7\e\1\j\l\l\t\d\6\k\w\i\g\a\a\l\9\j\z\g\d\z\7\j\l\w\d\c\y\0\u\c\x\1\1\u\9\z\b\5\y\e\m\d\q\c\a\d\w\k\j\c\a\w\s\6\6\9\j\p\e\k\2\g\l\5\b\0\n\x\g\1\x\u\x\c\t\h\t\l\c\f\a\4\9\z\r\w\a\1\s\u\1\f\n\e\5\n\k\m\p\3\j\q\8\u\p\2\y\b\8\m\z\0\5\b\2\u\x\i\7\3\m\k\n\p\u\y\p\l\4\5\n\i\y\9\4\i\j\6\h\r\m\1\g\1\a\b\y\8\k\d\9\6\4\n\4\m\o\t\0\h\c\t\l\y\n\s\t\h\c\v\0\q\y\l\l\o\8\b\s\o\c\o\7\0\y\n\x\f\q\c\v\o\e\r\7\q\k\h\o\a\b\g\5\d\w\v\n\r\p\c\0\c\4\g\v\j\a\3\6\4\1\b\v\n\a\2\w\j\b\5\a\o\v\k\9\0\g\1\s\0\i\d\k\v\u\9\g\0\v\8\e\z\3\l\3\o\v\z\d\f\n\1\v\w\4\w\u\m\2\p\g\v\y\p\r\o\d\y\1\x\4\k\7\e\y\g ]] 00:10:58.265 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.265 05:21:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:58.265 [2024-11-20 05:21:12.649762] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:58.265 [2024-11-20 05:21:12.650036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60937 ] 00:10:58.525 [2024-11-20 05:21:12.803299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.525 [2024-11-20 05:21:12.842372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.525 [2024-11-20 05:21:12.874227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.525  [2024-11-20T05:21:13.297Z] Copying: 512/512 [B] (average 500 kBps) 00:10:58.784 00:10:58.784 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4yscy5urs69nnh7lnljkcwfxvw8zxawt4fginojegzv137lsk7rte39e6z2elojfr9tm9i7fy2fb8yv8b3y0asworu4rzrnyxapveu71iwtrex7xkzh1f0ehmdk0pqwmt0sw99x8qygdp2lvu5c70ayg6budg6rzs3364cze2m6kz663mxplubguiiokpmqjs96ht99v4o8j5pclo1d4aj61ncwvpoqy7e1jlltd6kwigaal9jzgdz7jlwdcy0ucx11u9zb5yemdqcadwkjcaws669jpek2gl5b0nxg1xuxcthtlcfa49zrwa1su1fne5nkmp3jq8up2yb8mz05b2uxi73mknpuypl45niy94ij6hrm1g1aby8kd964n4mot0hctlynsthcv0qyllo8bsoco70ynxfqcvoer7qkhoabg5dwvnrpc0c4gvja3641bvna2wjb5aovk90g1s0idkvu9g0v8ez3l3ovzdfn1vw4wum2pgvyprody1x4k7eyg == \4\y\s\c\y\5\u\r\s\6\9\n\n\h\7\l\n\l\j\k\c\w\f\x\v\w\8\z\x\a\w\t\4\f\g\i\n\o\j\e\g\z\v\1\3\7\l\s\k\7\r\t\e\3\9\e\6\z\2\e\l\o\j\f\r\9\t\m\9\i\7\f\y\2\f\b\8\y\v\8\b\3\y\0\a\s\w\o\r\u\4\r\z\r\n\y\x\a\p\v\e\u\7\1\i\w\t\r\e\x\7\x\k\z\h\1\f\0\e\h\m\d\k\0\p\q\w\m\t\0\s\w\9\9\x\8\q\y\g\d\p\2\l\v\u\5\c\7\0\a\y\g\6\b\u\d\g\6\r\z\s\3\3\6\4\c\z\e\2\m\6\k\z\6\6\3\m\x\p\l\u\b\g\u\i\i\o\k\p\m\q\j\s\9\6\h\t\9\9\v\4\o\8\j\5\p\c\l\o\1\d\4\a\j\6\1\n\c\w\v\p\o\q\y\7\e\1\j\l\l\t\d\6\k\w\i\g\a\a\l\9\j\z\g\d\z\7\j\l\w\d\c\y\0\u\c\x\1\1\u\9\z\b\5\y\e\m\d\q\c\a\d\w\k\j\c\a\w\s\6\6\9\j\p\e\k\2\g\l\5\b\0\n\x\g\1\x\u\x\c\t\h\t\l\c\f\a\4\9\z\r\w\a\1\s\u\1\f\n\e\5\n\k\m\p\3\j\q\8\u\p\2\y\b\8\m\z\0\5\b\2\u\x\i\7\3\m\k\n\p\u\y\p\l\4\5\n\i\y\9\4\i\j\6\h\r\m\1\g\1\a\b\y\8\k\d\9\6\4\n\4\m\o\t\0\h\c\t\l\y\n\s\t\h\c\v\0\q\y\l\l\o\8\b\s\o\c\o\7\0\y\n\x\f\q\c\v\o\e\r\7\q\k\h\o\a\b\g\5\d\w\v\n\r\p\c\0\c\4\g\v\j\a\3\6\4\1\b\v\n\a\2\w\j\b\5\a\o\v\k\9\0\g\1\s\0\i\d\k\v\u\9\g\0\v\8\e\z\3\l\3\o\v\z\d\f\n\1\v\w\4\w\u\m\2\p\g\v\y\p\r\o\d\y\1\x\4\k\7\e\y\g ]] 00:10:58.784 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.784 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:58.784 [2024-11-20 05:21:13.112067] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:10:58.784 [2024-11-20 05:21:13.112160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60939 ] 00:10:58.784 [2024-11-20 05:21:13.265016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.044 [2024-11-20 05:21:13.307276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.044 [2024-11-20 05:21:13.340668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.044  [2024-11-20T05:21:13.557Z] Copying: 512/512 [B] (average 500 kBps) 00:10:59.044 00:10:59.044 ************************************ 00:10:59.044 END TEST dd_flags_misc_forced_aio 00:10:59.044 ************************************ 00:10:59.044 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4yscy5urs69nnh7lnljkcwfxvw8zxawt4fginojegzv137lsk7rte39e6z2elojfr9tm9i7fy2fb8yv8b3y0asworu4rzrnyxapveu71iwtrex7xkzh1f0ehmdk0pqwmt0sw99x8qygdp2lvu5c70ayg6budg6rzs3364cze2m6kz663mxplubguiiokpmqjs96ht99v4o8j5pclo1d4aj61ncwvpoqy7e1jlltd6kwigaal9jzgdz7jlwdcy0ucx11u9zb5yemdqcadwkjcaws669jpek2gl5b0nxg1xuxcthtlcfa49zrwa1su1fne5nkmp3jq8up2yb8mz05b2uxi73mknpuypl45niy94ij6hrm1g1aby8kd964n4mot0hctlynsthcv0qyllo8bsoco70ynxfqcvoer7qkhoabg5dwvnrpc0c4gvja3641bvna2wjb5aovk90g1s0idkvu9g0v8ez3l3ovzdfn1vw4wum2pgvyprody1x4k7eyg == \4\y\s\c\y\5\u\r\s\6\9\n\n\h\7\l\n\l\j\k\c\w\f\x\v\w\8\z\x\a\w\t\4\f\g\i\n\o\j\e\g\z\v\1\3\7\l\s\k\7\r\t\e\3\9\e\6\z\2\e\l\o\j\f\r\9\t\m\9\i\7\f\y\2\f\b\8\y\v\8\b\3\y\0\a\s\w\o\r\u\4\r\z\r\n\y\x\a\p\v\e\u\7\1\i\w\t\r\e\x\7\x\k\z\h\1\f\0\e\h\m\d\k\0\p\q\w\m\t\0\s\w\9\9\x\8\q\y\g\d\p\2\l\v\u\5\c\7\0\a\y\g\6\b\u\d\g\6\r\z\s\3\3\6\4\c\z\e\2\m\6\k\z\6\6\3\m\x\p\l\u\b\g\u\i\i\o\k\p\m\q\j\s\9\6\h\t\9\9\v\4\o\8\j\5\p\c\l\o\1\d\4\a\j\6\1\n\c\w\v\p\o\q\y\7\e\1\j\l\l\t\d\6\k\w\i\g\a\a\l\9\j\z\g\d\z\7\j\l\w\d\c\y\0\u\c\x\1\1\u\9\z\b\5\y\e\m\d\q\c\a\d\w\k\j\c\a\w\s\6\6\9\j\p\e\k\2\g\l\5\b\0\n\x\g\1\x\u\x\c\t\h\t\l\c\f\a\4\9\z\r\w\a\1\s\u\1\f\n\e\5\n\k\m\p\3\j\q\8\u\p\2\y\b\8\m\z\0\5\b\2\u\x\i\7\3\m\k\n\p\u\y\p\l\4\5\n\i\y\9\4\i\j\6\h\r\m\1\g\1\a\b\y\8\k\d\9\6\4\n\4\m\o\t\0\h\c\t\l\y\n\s\t\h\c\v\0\q\y\l\l\o\8\b\s\o\c\o\7\0\y\n\x\f\q\c\v\o\e\r\7\q\k\h\o\a\b\g\5\d\w\v\n\r\p\c\0\c\4\g\v\j\a\3\6\4\1\b\v\n\a\2\w\j\b\5\a\o\v\k\9\0\g\1\s\0\i\d\k\v\u\9\g\0\v\8\e\z\3\l\3\o\v\z\d\f\n\1\v\w\4\w\u\m\2\p\g\v\y\p\r\o\d\y\1\x\4\k\7\e\y\g ]] 00:10:59.044 00:10:59.044 real 0m3.702s 00:10:59.044 user 0m1.943s 00:10:59.044 sys 0m0.772s 00:10:59.044 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.044 05:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:59.303 05:21:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:10:59.303 05:21:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:59.303 05:21:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:59.303 ************************************ 00:10:59.303 END TEST spdk_dd_posix 00:10:59.303 ************************************ 00:10:59.303 00:10:59.303 real 0m17.208s 00:10:59.303 user 0m7.965s 00:10:59.303 sys 0m4.668s 00:10:59.303 05:21:13 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.303 05:21:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:59.303 05:21:13 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:59.303 05:21:13 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.303 05:21:13 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.303 05:21:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:59.303 ************************************ 00:10:59.303 START TEST spdk_dd_malloc 00:10:59.303 ************************************ 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:59.303 * Looking for test storage... 00:10:59.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.303 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.563 --rc genhtml_branch_coverage=1 00:10:59.563 --rc genhtml_function_coverage=1 00:10:59.563 --rc genhtml_legend=1 00:10:59.563 --rc geninfo_all_blocks=1 00:10:59.563 --rc geninfo_unexecuted_blocks=1 00:10:59.563 00:10:59.563 ' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.563 --rc genhtml_branch_coverage=1 00:10:59.563 --rc genhtml_function_coverage=1 00:10:59.563 --rc genhtml_legend=1 00:10:59.563 --rc geninfo_all_blocks=1 00:10:59.563 --rc geninfo_unexecuted_blocks=1 00:10:59.563 00:10:59.563 ' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.563 --rc genhtml_branch_coverage=1 00:10:59.563 --rc genhtml_function_coverage=1 00:10:59.563 --rc genhtml_legend=1 00:10:59.563 --rc geninfo_all_blocks=1 00:10:59.563 --rc geninfo_unexecuted_blocks=1 00:10:59.563 00:10:59.563 ' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.563 --rc genhtml_branch_coverage=1 00:10:59.563 --rc genhtml_function_coverage=1 00:10:59.563 --rc genhtml_legend=1 00:10:59.563 --rc geninfo_all_blocks=1 00:10:59.563 --rc geninfo_unexecuted_blocks=1 00:10:59.563 00:10:59.563 ' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.563 05:21:13 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:59.563 ************************************ 00:10:59.563 START TEST dd_malloc_copy 00:10:59.563 ************************************ 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:59.563 05:21:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:59.563 [2024-11-20 05:21:13.905129] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:10:59.563 [2024-11-20 05:21:13.905427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:10:59.563 { 00:10:59.563 "subsystems": [ 00:10:59.563 { 00:10:59.563 "subsystem": "bdev", 00:10:59.563 "config": [ 00:10:59.563 { 00:10:59.563 "params": { 00:10:59.563 "block_size": 512, 00:10:59.563 "num_blocks": 1048576, 00:10:59.563 "name": "malloc0" 00:10:59.563 }, 00:10:59.563 "method": "bdev_malloc_create" 00:10:59.563 }, 00:10:59.563 { 00:10:59.563 "params": { 00:10:59.563 "block_size": 512, 00:10:59.563 "num_blocks": 1048576, 00:10:59.563 "name": "malloc1" 00:10:59.563 }, 00:10:59.563 "method": "bdev_malloc_create" 00:10:59.563 }, 00:10:59.563 { 00:10:59.563 "method": "bdev_wait_for_examine" 00:10:59.563 } 00:10:59.563 ] 00:10:59.563 } 00:10:59.563 ] 00:10:59.563 } 00:10:59.563 [2024-11-20 05:21:14.062640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.823 [2024-11-20 05:21:14.102324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.823 [2024-11-20 05:21:14.136080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.200  [2024-11-20T05:21:16.649Z] Copying: 190/512 [MB] (190 MBps) [2024-11-20T05:21:17.217Z] Copying: 380/512 [MB] (190 MBps) [2024-11-20T05:21:17.476Z] Copying: 512/512 [MB] (average 189 MBps) 00:11:02.963 00:11:02.963 05:21:17 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:02.963 05:21:17 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:02.963 05:21:17 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:02.963 05:21:17 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:02.963 { 00:11:02.963 "subsystems": [ 00:11:02.963 { 00:11:02.963 "subsystem": "bdev", 00:11:02.963 "config": [ 00:11:02.963 { 00:11:02.963 "params": { 00:11:02.963 "block_size": 512, 00:11:02.963 "num_blocks": 1048576, 00:11:02.963 "name": "malloc0" 00:11:02.963 }, 00:11:02.963 "method": "bdev_malloc_create" 00:11:02.963 }, 00:11:02.963 { 00:11:02.963 "params": { 00:11:02.963 "block_size": 512, 00:11:02.963 "num_blocks": 1048576, 00:11:02.963 "name": "malloc1" 00:11:02.963 }, 00:11:02.963 "method": 
"bdev_malloc_create" 00:11:02.963 }, 00:11:02.963 { 00:11:02.963 "method": "bdev_wait_for_examine" 00:11:02.963 } 00:11:02.963 ] 00:11:02.963 } 00:11:02.963 ] 00:11:02.963 } 00:11:02.963 [2024-11-20 05:21:17.438806] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:02.963 [2024-11-20 05:21:17.439160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:11:03.223 [2024-11-20 05:21:17.586619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.223 [2024-11-20 05:21:17.622525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.223 [2024-11-20 05:21:17.654072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.601  [2024-11-20T05:21:20.049Z] Copying: 185/512 [MB] (185 MBps) [2024-11-20T05:21:20.666Z] Copying: 376/512 [MB] (190 MBps) [2024-11-20T05:21:21.235Z] Copying: 512/512 [MB] (average 188 MBps) 00:11:06.722 00:11:06.722 00:11:06.722 real 0m7.086s 00:11:06.722 user 0m6.371s 00:11:06.722 sys 0m0.529s 00:11:06.722 05:21:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.722 05:21:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:06.722 ************************************ 00:11:06.722 END TEST dd_malloc_copy 00:11:06.722 ************************************ 00:11:06.722 ************************************ 00:11:06.722 END TEST spdk_dd_malloc 00:11:06.722 ************************************ 00:11:06.722 00:11:06.722 real 0m7.350s 00:11:06.722 user 0m6.538s 00:11:06.722 sys 0m0.629s 00:11:06.722 05:21:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.722 05:21:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:06.722 05:21:21 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:06.722 05:21:21 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.722 05:21:21 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.722 05:21:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:06.722 ************************************ 00:11:06.722 START TEST spdk_dd_bdev_to_bdev 00:11:06.722 ************************************ 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:06.722 * Looking for test storage... 
00:11:06.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.722 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:06.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.723 --rc genhtml_branch_coverage=1 00:11:06.723 --rc genhtml_function_coverage=1 00:11:06.723 --rc genhtml_legend=1 00:11:06.723 --rc geninfo_all_blocks=1 00:11:06.723 --rc geninfo_unexecuted_blocks=1 00:11:06.723 00:11:06.723 ' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.723 --rc genhtml_branch_coverage=1 00:11:06.723 --rc genhtml_function_coverage=1 00:11:06.723 --rc genhtml_legend=1 00:11:06.723 --rc geninfo_all_blocks=1 00:11:06.723 --rc geninfo_unexecuted_blocks=1 00:11:06.723 00:11:06.723 ' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.723 --rc genhtml_branch_coverage=1 00:11:06.723 --rc genhtml_function_coverage=1 00:11:06.723 --rc genhtml_legend=1 00:11:06.723 --rc geninfo_all_blocks=1 00:11:06.723 --rc geninfo_unexecuted_blocks=1 00:11:06.723 00:11:06.723 ' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.723 --rc genhtml_branch_coverage=1 00:11:06.723 --rc genhtml_function_coverage=1 00:11:06.723 --rc genhtml_legend=1 00:11:06.723 --rc geninfo_all_blocks=1 00:11:06.723 --rc geninfo_unexecuted_blocks=1 00:11:06.723 00:11:06.723 ' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.723 05:21:21 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:06.723 ************************************ 00:11:06.723 START TEST dd_inflate_file 00:11:06.723 ************************************ 00:11:06.723 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:06.982 [2024-11-20 05:21:21.268350] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:06.982 [2024-11-20 05:21:21.268613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61181 ] 00:11:06.982 [2024-11-20 05:21:21.417938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.982 [2024-11-20 05:21:21.459460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.982 [2024-11-20 05:21:21.493847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.241  [2024-11-20T05:21:21.754Z] Copying: 64/64 [MB] (average 1684 MBps) 00:11:07.242 00:11:07.242 00:11:07.242 real 0m0.479s 00:11:07.242 user 0m0.273s 00:11:07.242 sys 0m0.226s 00:11:07.242 ************************************ 00:11:07.242 END TEST dd_inflate_file 00:11:07.242 ************************************ 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:07.242 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:07.501 ************************************ 00:11:07.501 START TEST dd_copy_to_out_bdev 00:11:07.501 ************************************ 00:11:07.501 05:21:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:07.501 { 00:11:07.501 "subsystems": [ 00:11:07.501 { 00:11:07.501 "subsystem": "bdev", 00:11:07.501 "config": [ 00:11:07.501 { 00:11:07.501 "params": { 00:11:07.501 "trtype": "pcie", 00:11:07.501 "traddr": "0000:00:10.0", 00:11:07.501 "name": "Nvme0" 00:11:07.501 }, 00:11:07.501 "method": "bdev_nvme_attach_controller" 00:11:07.501 }, 00:11:07.501 { 00:11:07.501 "params": { 00:11:07.501 "trtype": "pcie", 00:11:07.501 "traddr": "0000:00:11.0", 00:11:07.501 "name": "Nvme1" 00:11:07.501 }, 00:11:07.501 "method": "bdev_nvme_attach_controller" 00:11:07.501 }, 00:11:07.501 { 00:11:07.501 "method": "bdev_wait_for_examine" 00:11:07.501 } 00:11:07.501 ] 00:11:07.501 } 00:11:07.501 ] 00:11:07.501 } 00:11:07.501 [2024-11-20 05:21:21.811100] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:07.501 [2024-11-20 05:21:21.811209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:11:07.501 [2024-11-20 05:21:21.964683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.501 [2024-11-20 05:21:22.006571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.760 [2024-11-20 05:21:22.041527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.696  [2024-11-20T05:21:23.468Z] Copying: 64/64 [MB] (average 70 MBps) 00:11:08.955 00:11:08.955 00:11:08.955 real 0m1.539s 00:11:08.955 user 0m1.342s 00:11:08.955 sys 0m1.186s 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.955 ************************************ 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:08.955 END TEST dd_copy_to_out_bdev 00:11:08.955 ************************************ 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:08.955 ************************************ 00:11:08.955 START TEST dd_offset_magic 00:11:08.955 ************************************ 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:08.955 05:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:08.955 { 00:11:08.955 "subsystems": [ 00:11:08.955 { 00:11:08.955 "subsystem": "bdev", 00:11:08.955 "config": [ 00:11:08.955 { 00:11:08.955 "params": { 00:11:08.955 "trtype": "pcie", 00:11:08.955 "traddr": "0000:00:10.0", 00:11:08.955 "name": "Nvme0" 00:11:08.955 }, 00:11:08.955 "method": "bdev_nvme_attach_controller" 00:11:08.955 }, 00:11:08.956 { 00:11:08.956 "params": { 00:11:08.956 "trtype": "pcie", 00:11:08.956 "traddr": "0000:00:11.0", 00:11:08.956 "name": "Nvme1" 00:11:08.956 }, 00:11:08.956 "method": 
"bdev_nvme_attach_controller" 00:11:08.956 }, 00:11:08.956 { 00:11:08.956 "method": "bdev_wait_for_examine" 00:11:08.956 } 00:11:08.956 ] 00:11:08.956 } 00:11:08.956 ] 00:11:08.956 } 00:11:08.956 [2024-11-20 05:21:23.424403] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:08.956 [2024-11-20 05:21:23.424555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:11:09.214 [2024-11-20 05:21:23.588309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.214 [2024-11-20 05:21:23.628347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.214 [2024-11-20 05:21:23.662341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.473  [2024-11-20T05:21:24.244Z] Copying: 65/65 [MB] (average 1226 MBps) 00:11:09.731 00:11:09.731 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:09.731 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:09.731 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:09.731 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:09.731 { 00:11:09.731 "subsystems": [ 00:11:09.731 { 00:11:09.731 "subsystem": "bdev", 00:11:09.731 "config": [ 00:11:09.731 { 00:11:09.731 "params": { 00:11:09.731 "trtype": "pcie", 00:11:09.731 "traddr": "0000:00:10.0", 00:11:09.731 "name": "Nvme0" 00:11:09.731 }, 00:11:09.731 "method": "bdev_nvme_attach_controller" 00:11:09.731 }, 00:11:09.731 { 00:11:09.731 "params": { 00:11:09.732 "trtype": "pcie", 00:11:09.732 "traddr": "0000:00:11.0", 00:11:09.732 "name": "Nvme1" 00:11:09.732 }, 00:11:09.732 "method": "bdev_nvme_attach_controller" 00:11:09.732 }, 00:11:09.732 { 00:11:09.732 "method": "bdev_wait_for_examine" 00:11:09.732 } 00:11:09.732 ] 00:11:09.732 } 00:11:09.732 ] 00:11:09.732 } 00:11:09.732 [2024-11-20 05:21:24.135871] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:09.732 [2024-11-20 05:21:24.136020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61269 ] 00:11:09.990 [2024-11-20 05:21:24.283565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.990 [2024-11-20 05:21:24.319136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.990 [2024-11-20 05:21:24.350410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.249  [2024-11-20T05:21:24.762Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:10.249 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:10.249 05:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:10.249 [2024-11-20 05:21:24.700058] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:10.249 [2024-11-20 05:21:24.700160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:11:10.249 { 00:11:10.249 "subsystems": [ 00:11:10.249 { 00:11:10.249 "subsystem": "bdev", 00:11:10.249 "config": [ 00:11:10.249 { 00:11:10.249 "params": { 00:11:10.249 "trtype": "pcie", 00:11:10.249 "traddr": "0000:00:10.0", 00:11:10.249 "name": "Nvme0" 00:11:10.249 }, 00:11:10.249 "method": "bdev_nvme_attach_controller" 00:11:10.249 }, 00:11:10.249 { 00:11:10.249 "params": { 00:11:10.249 "trtype": "pcie", 00:11:10.249 "traddr": "0000:00:11.0", 00:11:10.249 "name": "Nvme1" 00:11:10.249 }, 00:11:10.249 "method": "bdev_nvme_attach_controller" 00:11:10.249 }, 00:11:10.249 { 00:11:10.249 "method": "bdev_wait_for_examine" 00:11:10.249 } 00:11:10.249 ] 00:11:10.249 } 00:11:10.249 ] 00:11:10.249 } 00:11:10.508 [2024-11-20 05:21:24.851324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.508 [2024-11-20 05:21:24.887386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.508 [2024-11-20 05:21:24.920486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.767  [2024-11-20T05:21:25.539Z] Copying: 65/65 [MB] (average 1354 MBps) 00:11:11.026 00:11:11.026 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:11.026 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:11.026 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:11.026 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:11.026 [2024-11-20 05:21:25.367548] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:11.026 [2024-11-20 05:21:25.367642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61305 ] 00:11:11.026 { 00:11:11.026 "subsystems": [ 00:11:11.026 { 00:11:11.026 "subsystem": "bdev", 00:11:11.026 "config": [ 00:11:11.026 { 00:11:11.026 "params": { 00:11:11.026 "trtype": "pcie", 00:11:11.026 "traddr": "0000:00:10.0", 00:11:11.026 "name": "Nvme0" 00:11:11.026 }, 00:11:11.026 "method": "bdev_nvme_attach_controller" 00:11:11.026 }, 00:11:11.026 { 00:11:11.026 "params": { 00:11:11.026 "trtype": "pcie", 00:11:11.026 "traddr": "0000:00:11.0", 00:11:11.026 "name": "Nvme1" 00:11:11.026 }, 00:11:11.026 "method": "bdev_nvme_attach_controller" 00:11:11.026 }, 00:11:11.026 { 00:11:11.026 "method": "bdev_wait_for_examine" 00:11:11.026 } 00:11:11.026 ] 00:11:11.026 } 00:11:11.026 ] 00:11:11.026 } 00:11:11.026 [2024-11-20 05:21:25.517061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.285 [2024-11-20 05:21:25.550269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.285 [2024-11-20 05:21:25.580741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.285  [2024-11-20T05:21:26.057Z] Copying: 1024/1024 [kB] (average 333 MBps) 00:11:11.544 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:11.544 00:11:11.544 real 0m2.544s 00:11:11.544 user 0m1.861s 00:11:11.544 sys 0m0.678s 00:11:11.544 ************************************ 00:11:11.544 END TEST dd_offset_magic 00:11:11.544 ************************************ 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:11.544 05:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:11.544 [2024-11-20 05:21:25.982079] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
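The dd_offset_magic iterations above each copy 65 MiB from Nvme0n1 into Nvme1n1 at an offset (--seek=16, then --seek=64), dump 1 MiB back from Nvme1n1 at the same offset (--skip), and look for the 26-byte marker at the start of that dump. Stripped of the harness plumbing, the check reduces to roughly the following sketch (dd.dump1 is the dump file the test writes under test/dd):

  magic='This Is Our Magic, find it'
  # the dump was produced with: spdk_dd --ib=Nvme1n1 --of=.../dd.dump1 --count=1 --skip=<offset> --bs=1048576
  read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  [[ "$magic_check" == "$magic" ]] || { echo "magic not found at offset" >&2; exit 1; }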
00:11:11.544 [2024-11-20 05:21:25.982181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:11:11.544 { 00:11:11.544 "subsystems": [ 00:11:11.544 { 00:11:11.544 "subsystem": "bdev", 00:11:11.544 "config": [ 00:11:11.544 { 00:11:11.544 "params": { 00:11:11.544 "trtype": "pcie", 00:11:11.544 "traddr": "0000:00:10.0", 00:11:11.544 "name": "Nvme0" 00:11:11.544 }, 00:11:11.544 "method": "bdev_nvme_attach_controller" 00:11:11.544 }, 00:11:11.544 { 00:11:11.544 "params": { 00:11:11.544 "trtype": "pcie", 00:11:11.544 "traddr": "0000:00:11.0", 00:11:11.544 "name": "Nvme1" 00:11:11.544 }, 00:11:11.544 "method": "bdev_nvme_attach_controller" 00:11:11.544 }, 00:11:11.544 { 00:11:11.544 "method": "bdev_wait_for_examine" 00:11:11.544 } 00:11:11.544 ] 00:11:11.544 } 00:11:11.544 ] 00:11:11.544 } 00:11:11.803 [2024-11-20 05:21:26.135867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.803 [2024-11-20 05:21:26.175960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.803 [2024-11-20 05:21:26.212621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.073  [2024-11-20T05:21:26.586Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:12.073 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:12.073 05:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:12.073 [2024-11-20 05:21:26.577055] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
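Every spdk_dd invocation in this log builds its block-device layer from a JSON config that gen_conf writes to file descriptor 62 (hence --json /dev/fd/62); the clear_nvme cleanup runs above and below simply zero the first 5 MiB of each namespace through that same path. Outside the harness the equivalent is to point --json at an ordinary file. A minimal sketch, assuming the same two local NVMe controllers at 0000:00:10.0 and 0000:00:11.0 and an arbitrary file name dd_bdev.json (not part of the test itself):

  dd_bdev.json -- the same bdev subsystem block the harness generates:
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }

  # zero the first 5 MiB of Nvme0n1, as clear_nvme does here
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json dd_bdev.json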
00:11:12.073 [2024-11-20 05:21:26.577157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61352 ] 00:11:12.331 { 00:11:12.331 "subsystems": [ 00:11:12.331 { 00:11:12.331 "subsystem": "bdev", 00:11:12.331 "config": [ 00:11:12.331 { 00:11:12.331 "params": { 00:11:12.331 "trtype": "pcie", 00:11:12.331 "traddr": "0000:00:10.0", 00:11:12.331 "name": "Nvme0" 00:11:12.331 }, 00:11:12.331 "method": "bdev_nvme_attach_controller" 00:11:12.331 }, 00:11:12.331 { 00:11:12.331 "params": { 00:11:12.331 "trtype": "pcie", 00:11:12.331 "traddr": "0000:00:11.0", 00:11:12.331 "name": "Nvme1" 00:11:12.331 }, 00:11:12.331 "method": "bdev_nvme_attach_controller" 00:11:12.331 }, 00:11:12.331 { 00:11:12.331 "method": "bdev_wait_for_examine" 00:11:12.331 } 00:11:12.331 ] 00:11:12.331 } 00:11:12.331 ] 00:11:12.331 } 00:11:12.331 [2024-11-20 05:21:26.724339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.331 [2024-11-20 05:21:26.757470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.331 [2024-11-20 05:21:26.787059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.590  [2024-11-20T05:21:27.103Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:12.590 00:11:12.590 05:21:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:12.849 ************************************ 00:11:12.849 END TEST spdk_dd_bdev_to_bdev 00:11:12.849 ************************************ 00:11:12.849 00:11:12.849 real 0m6.095s 00:11:12.849 user 0m4.487s 00:11:12.849 sys 0m2.647s 00:11:12.849 05:21:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.849 05:21:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:12.849 05:21:27 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:12.849 05:21:27 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:12.849 05:21:27 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:12.849 05:21:27 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.849 05:21:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:12.849 ************************************ 00:11:12.849 START TEST spdk_dd_uring 00:11:12.849 ************************************ 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:12.849 * Looking for test storage... 
00:11:12.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.849 --rc genhtml_branch_coverage=1 00:11:12.849 --rc genhtml_function_coverage=1 00:11:12.849 --rc genhtml_legend=1 00:11:12.849 --rc geninfo_all_blocks=1 00:11:12.849 --rc geninfo_unexecuted_blocks=1 00:11:12.849 00:11:12.849 ' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.849 --rc genhtml_branch_coverage=1 00:11:12.849 --rc genhtml_function_coverage=1 00:11:12.849 --rc genhtml_legend=1 00:11:12.849 --rc geninfo_all_blocks=1 00:11:12.849 --rc geninfo_unexecuted_blocks=1 00:11:12.849 00:11:12.849 ' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:12.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.849 --rc genhtml_branch_coverage=1 00:11:12.849 --rc genhtml_function_coverage=1 00:11:12.849 --rc genhtml_legend=1 00:11:12.849 --rc geninfo_all_blocks=1 00:11:12.849 --rc geninfo_unexecuted_blocks=1 00:11:12.849 00:11:12.849 ' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.849 --rc genhtml_branch_coverage=1 00:11:12.849 --rc genhtml_function_coverage=1 00:11:12.849 --rc genhtml_legend=1 00:11:12.849 --rc geninfo_all_blocks=1 00:11:12.849 --rc geninfo_unexecuted_blocks=1 00:11:12.849 00:11:12.849 ' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:12.849 ************************************ 00:11:12.849 START TEST dd_uring_copy 00:11:12.849 ************************************ 00:11:12.849 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:12.850 
05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:12.850 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=46au101u1zp10ze4yqm9b2uah7rk3rfriq3c48r3l4m2nj3qjivk49i4z82w0kl40s7b5729xwzzwwijge2qeksotxi6mfkbk8r2tg2m2fjnpvbhns3rwckhjjbpj4hw4po66g84f3tgsiqi6r84n6kx3gst5pxl9mcnq477f9rtas4gyo74vx0b6mdm7ruq3qpkwriekurp3rq58lnyg0z85i15bfuhtrdijcqvvx2umcnuu2v3xof94sp8nmndy66sx2oxe7x9ut8kgpm7k7qjq7pwc9iedhb10zi794cwhhzwwsj4zjjvyucp3o2ihx8fik4fiv7c5pkmulqp0ig24r58jmzlb4i9u48pgzgtosv2ywds834mjli81okgxt6d0n0afpa05gj2891iqubhhcxczbtfu1k25icegaetnzgullgpn28b04ad76275lweu4kpc53dz6durrlym18mnr9d7jlk6yk0mo73oawwwta04qnpsi37rogkhzd053yr8wbyztfzt6ufy5hprmj47tirri6ukm9e6g71kmx275nz37z1bnb2qwilxoro8ir9uipe2kr86xp713tgi08xkqvhdrvf51vwjba8ipzyt00hvnf4hib1y4t83gb5sxo29uuh7tdycoui0w30ze0z5rjhmnhgcrrjmrohgr5c73vfu3g20thw0q8n9l863r628u83fciok5sbyt3n5nlr4pnbrd84ku9z67aa7ivn93vtqd17dfgoyapwomv2i2khnhkj395xfzw1kzaarsq8brmyfop3uctp8p827pxnuh9yetlwd07w1fu1i69vqd3h3nwxzn0p5ws7qiadnm4gpr77ipbwd6wiq50eazc4gzufufuos7e6ks2i504mb2wz5qs0urcuyb72wn9d9esqtn5256z5hf013elihqbswjqgs3iy71w5oclffytx98l4xxw9c3pfa2xwgb09dm2w64jnjt8500cbts9c5szcp6ty1olkph3ap1cbbq1e 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
46au101u1zp10ze4yqm9b2uah7rk3rfriq3c48r3l4m2nj3qjivk49i4z82w0kl40s7b5729xwzzwwijge2qeksotxi6mfkbk8r2tg2m2fjnpvbhns3rwckhjjbpj4hw4po66g84f3tgsiqi6r84n6kx3gst5pxl9mcnq477f9rtas4gyo74vx0b6mdm7ruq3qpkwriekurp3rq58lnyg0z85i15bfuhtrdijcqvvx2umcnuu2v3xof94sp8nmndy66sx2oxe7x9ut8kgpm7k7qjq7pwc9iedhb10zi794cwhhzwwsj4zjjvyucp3o2ihx8fik4fiv7c5pkmulqp0ig24r58jmzlb4i9u48pgzgtosv2ywds834mjli81okgxt6d0n0afpa05gj2891iqubhhcxczbtfu1k25icegaetnzgullgpn28b04ad76275lweu4kpc53dz6durrlym18mnr9d7jlk6yk0mo73oawwwta04qnpsi37rogkhzd053yr8wbyztfzt6ufy5hprmj47tirri6ukm9e6g71kmx275nz37z1bnb2qwilxoro8ir9uipe2kr86xp713tgi08xkqvhdrvf51vwjba8ipzyt00hvnf4hib1y4t83gb5sxo29uuh7tdycoui0w30ze0z5rjhmnhgcrrjmrohgr5c73vfu3g20thw0q8n9l863r628u83fciok5sbyt3n5nlr4pnbrd84ku9z67aa7ivn93vtqd17dfgoyapwomv2i2khnhkj395xfzw1kzaarsq8brmyfop3uctp8p827pxnuh9yetlwd07w1fu1i69vqd3h3nwxzn0p5ws7qiadnm4gpr77ipbwd6wiq50eazc4gzufufuos7e6ks2i504mb2wz5qs0urcuyb72wn9d9esqtn5256z5hf013elihqbswjqgs3iy71w5oclffytx98l4xxw9c3pfa2xwgb09dm2w64jnjt8500cbts9c5szcp6ty1olkph3ap1cbbq1e 00:11:13.108 05:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:13.108 [2024-11-20 05:21:27.440430] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:13.108 [2024-11-20 05:21:27.440543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61430 ] 00:11:13.108 [2024-11-20 05:21:27.589844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.366 [2024-11-20 05:21:27.632667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.366 [2024-11-20 05:21:27.669363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.932  [2024-11-20T05:21:28.445Z] Copying: 511/511 [MB] (average 1402 MBps) 00:11:13.932 00:11:13.932 05:21:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:13.932 05:21:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:13.932 05:21:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:13.932 05:21:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:14.191 [2024-11-20 05:21:28.492175] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:14.191 [2024-11-20 05:21:28.492273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ] 00:11:14.191 { 00:11:14.191 "subsystems": [ 00:11:14.191 { 00:11:14.191 "subsystem": "bdev", 00:11:14.191 "config": [ 00:11:14.191 { 00:11:14.191 "params": { 00:11:14.191 "block_size": 512, 00:11:14.191 "num_blocks": 1048576, 00:11:14.191 "name": "malloc0" 00:11:14.191 }, 00:11:14.191 "method": "bdev_malloc_create" 00:11:14.191 }, 00:11:14.191 { 00:11:14.191 "params": { 00:11:14.191 "filename": "/dev/zram1", 00:11:14.191 "name": "uring0" 00:11:14.191 }, 00:11:14.191 "method": "bdev_uring_create" 00:11:14.191 }, 00:11:14.191 { 00:11:14.191 "method": "bdev_wait_for_examine" 00:11:14.191 } 00:11:14.191 ] 00:11:14.191 } 00:11:14.191 ] 00:11:14.191 } 00:11:14.191 [2024-11-20 05:21:28.643509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.191 [2024-11-20 05:21:28.687678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.449 [2024-11-20 05:21:28.725020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:15.380  [2024-11-20T05:21:31.269Z] Copying: 196/512 [MB] (196 MBps) [2024-11-20T05:21:31.528Z] Copying: 400/512 [MB] (204 MBps) [2024-11-20T05:21:31.787Z] Copying: 512/512 [MB] (average 202 MBps) 00:11:17.274 00:11:17.274 05:21:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:17.274 05:21:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:17.274 05:21:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:17.274 05:21:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:17.274 [2024-11-20 05:21:31.677035] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
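The dd_uring_copy test that produced the copy rates above stages everything on a compressed RAM disk: it hot-adds a zram device, sizes it to 512M, and exposes it to spdk_dd as a uring bdev (uring0) backed by /dev/zram1, next to a 512 MiB malloc bdev (malloc0, 1048576 blocks of 512 bytes). A minimal sketch of that provisioning, assuming the zram module is loaded and the newly added device comes back with id 1 as it did here:

  id=$(cat /sys/class/zram-control/hot_add)       # allocates the next free zram device and prints its id
  echo 512M > "/sys/block/zram${id}/disksize"     # size it before first use
  # spdk_dd then attaches it through the JSON config shown above:
  #   { "params": { "filename": "/dev/zram1", "name": "uring0" }, "method": "bdev_uring_create" }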
00:11:17.274 [2024-11-20 05:21:31.677137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:11:17.274 { 00:11:17.274 "subsystems": [ 00:11:17.274 { 00:11:17.274 "subsystem": "bdev", 00:11:17.274 "config": [ 00:11:17.274 { 00:11:17.274 "params": { 00:11:17.274 "block_size": 512, 00:11:17.274 "num_blocks": 1048576, 00:11:17.274 "name": "malloc0" 00:11:17.274 }, 00:11:17.274 "method": "bdev_malloc_create" 00:11:17.274 }, 00:11:17.274 { 00:11:17.274 "params": { 00:11:17.274 "filename": "/dev/zram1", 00:11:17.274 "name": "uring0" 00:11:17.274 }, 00:11:17.274 "method": "bdev_uring_create" 00:11:17.274 }, 00:11:17.274 { 00:11:17.274 "method": "bdev_wait_for_examine" 00:11:17.274 } 00:11:17.274 ] 00:11:17.274 } 00:11:17.274 ] 00:11:17.274 } 00:11:17.532 [2024-11-20 05:21:31.824713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.532 [2024-11-20 05:21:31.857836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.532 [2024-11-20 05:21:31.887852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.907  [2024-11-20T05:21:34.358Z] Copying: 171/512 [MB] (171 MBps) [2024-11-20T05:21:35.293Z] Copying: 335/512 [MB] (163 MBps) [2024-11-20T05:21:35.293Z] Copying: 489/512 [MB] (153 MBps) [2024-11-20T05:21:35.553Z] Copying: 512/512 [MB] (average 162 MBps) 00:11:21.040 00:11:21.040 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:21.040 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 46au101u1zp10ze4yqm9b2uah7rk3rfriq3c48r3l4m2nj3qjivk49i4z82w0kl40s7b5729xwzzwwijge2qeksotxi6mfkbk8r2tg2m2fjnpvbhns3rwckhjjbpj4hw4po66g84f3tgsiqi6r84n6kx3gst5pxl9mcnq477f9rtas4gyo74vx0b6mdm7ruq3qpkwriekurp3rq58lnyg0z85i15bfuhtrdijcqvvx2umcnuu2v3xof94sp8nmndy66sx2oxe7x9ut8kgpm7k7qjq7pwc9iedhb10zi794cwhhzwwsj4zjjvyucp3o2ihx8fik4fiv7c5pkmulqp0ig24r58jmzlb4i9u48pgzgtosv2ywds834mjli81okgxt6d0n0afpa05gj2891iqubhhcxczbtfu1k25icegaetnzgullgpn28b04ad76275lweu4kpc53dz6durrlym18mnr9d7jlk6yk0mo73oawwwta04qnpsi37rogkhzd053yr8wbyztfzt6ufy5hprmj47tirri6ukm9e6g71kmx275nz37z1bnb2qwilxoro8ir9uipe2kr86xp713tgi08xkqvhdrvf51vwjba8ipzyt00hvnf4hib1y4t83gb5sxo29uuh7tdycoui0w30ze0z5rjhmnhgcrrjmrohgr5c73vfu3g20thw0q8n9l863r628u83fciok5sbyt3n5nlr4pnbrd84ku9z67aa7ivn93vtqd17dfgoyapwomv2i2khnhkj395xfzw1kzaarsq8brmyfop3uctp8p827pxnuh9yetlwd07w1fu1i69vqd3h3nwxzn0p5ws7qiadnm4gpr77ipbwd6wiq50eazc4gzufufuos7e6ks2i504mb2wz5qs0urcuyb72wn9d9esqtn5256z5hf013elihqbswjqgs3iy71w5oclffytx98l4xxw9c3pfa2xwgb09dm2w64jnjt8500cbts9c5szcp6ty1olkph3ap1cbbq1e == 
\4\6\a\u\1\0\1\u\1\z\p\1\0\z\e\4\y\q\m\9\b\2\u\a\h\7\r\k\3\r\f\r\i\q\3\c\4\8\r\3\l\4\m\2\n\j\3\q\j\i\v\k\4\9\i\4\z\8\2\w\0\k\l\4\0\s\7\b\5\7\2\9\x\w\z\z\w\w\i\j\g\e\2\q\e\k\s\o\t\x\i\6\m\f\k\b\k\8\r\2\t\g\2\m\2\f\j\n\p\v\b\h\n\s\3\r\w\c\k\h\j\j\b\p\j\4\h\w\4\p\o\6\6\g\8\4\f\3\t\g\s\i\q\i\6\r\8\4\n\6\k\x\3\g\s\t\5\p\x\l\9\m\c\n\q\4\7\7\f\9\r\t\a\s\4\g\y\o\7\4\v\x\0\b\6\m\d\m\7\r\u\q\3\q\p\k\w\r\i\e\k\u\r\p\3\r\q\5\8\l\n\y\g\0\z\8\5\i\1\5\b\f\u\h\t\r\d\i\j\c\q\v\v\x\2\u\m\c\n\u\u\2\v\3\x\o\f\9\4\s\p\8\n\m\n\d\y\6\6\s\x\2\o\x\e\7\x\9\u\t\8\k\g\p\m\7\k\7\q\j\q\7\p\w\c\9\i\e\d\h\b\1\0\z\i\7\9\4\c\w\h\h\z\w\w\s\j\4\z\j\j\v\y\u\c\p\3\o\2\i\h\x\8\f\i\k\4\f\i\v\7\c\5\p\k\m\u\l\q\p\0\i\g\2\4\r\5\8\j\m\z\l\b\4\i\9\u\4\8\p\g\z\g\t\o\s\v\2\y\w\d\s\8\3\4\m\j\l\i\8\1\o\k\g\x\t\6\d\0\n\0\a\f\p\a\0\5\g\j\2\8\9\1\i\q\u\b\h\h\c\x\c\z\b\t\f\u\1\k\2\5\i\c\e\g\a\e\t\n\z\g\u\l\l\g\p\n\2\8\b\0\4\a\d\7\6\2\7\5\l\w\e\u\4\k\p\c\5\3\d\z\6\d\u\r\r\l\y\m\1\8\m\n\r\9\d\7\j\l\k\6\y\k\0\m\o\7\3\o\a\w\w\w\t\a\0\4\q\n\p\s\i\3\7\r\o\g\k\h\z\d\0\5\3\y\r\8\w\b\y\z\t\f\z\t\6\u\f\y\5\h\p\r\m\j\4\7\t\i\r\r\i\6\u\k\m\9\e\6\g\7\1\k\m\x\2\7\5\n\z\3\7\z\1\b\n\b\2\q\w\i\l\x\o\r\o\8\i\r\9\u\i\p\e\2\k\r\8\6\x\p\7\1\3\t\g\i\0\8\x\k\q\v\h\d\r\v\f\5\1\v\w\j\b\a\8\i\p\z\y\t\0\0\h\v\n\f\4\h\i\b\1\y\4\t\8\3\g\b\5\s\x\o\2\9\u\u\h\7\t\d\y\c\o\u\i\0\w\3\0\z\e\0\z\5\r\j\h\m\n\h\g\c\r\r\j\m\r\o\h\g\r\5\c\7\3\v\f\u\3\g\2\0\t\h\w\0\q\8\n\9\l\8\6\3\r\6\2\8\u\8\3\f\c\i\o\k\5\s\b\y\t\3\n\5\n\l\r\4\p\n\b\r\d\8\4\k\u\9\z\6\7\a\a\7\i\v\n\9\3\v\t\q\d\1\7\d\f\g\o\y\a\p\w\o\m\v\2\i\2\k\h\n\h\k\j\3\9\5\x\f\z\w\1\k\z\a\a\r\s\q\8\b\r\m\y\f\o\p\3\u\c\t\p\8\p\8\2\7\p\x\n\u\h\9\y\e\t\l\w\d\0\7\w\1\f\u\1\i\6\9\v\q\d\3\h\3\n\w\x\z\n\0\p\5\w\s\7\q\i\a\d\n\m\4\g\p\r\7\7\i\p\b\w\d\6\w\i\q\5\0\e\a\z\c\4\g\z\u\f\u\f\u\o\s\7\e\6\k\s\2\i\5\0\4\m\b\2\w\z\5\q\s\0\u\r\c\u\y\b\7\2\w\n\9\d\9\e\s\q\t\n\5\2\5\6\z\5\h\f\0\1\3\e\l\i\h\q\b\s\w\j\q\g\s\3\i\y\7\1\w\5\o\c\l\f\f\y\t\x\9\8\l\4\x\x\w\9\c\3\p\f\a\2\x\w\g\b\0\9\d\m\2\w\6\4\j\n\j\t\8\5\0\0\c\b\t\s\9\c\5\s\z\c\p\6\t\y\1\o\l\k\p\h\3\a\p\1\c\b\b\q\1\e ]] 00:11:21.040 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:21.041 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 46au101u1zp10ze4yqm9b2uah7rk3rfriq3c48r3l4m2nj3qjivk49i4z82w0kl40s7b5729xwzzwwijge2qeksotxi6mfkbk8r2tg2m2fjnpvbhns3rwckhjjbpj4hw4po66g84f3tgsiqi6r84n6kx3gst5pxl9mcnq477f9rtas4gyo74vx0b6mdm7ruq3qpkwriekurp3rq58lnyg0z85i15bfuhtrdijcqvvx2umcnuu2v3xof94sp8nmndy66sx2oxe7x9ut8kgpm7k7qjq7pwc9iedhb10zi794cwhhzwwsj4zjjvyucp3o2ihx8fik4fiv7c5pkmulqp0ig24r58jmzlb4i9u48pgzgtosv2ywds834mjli81okgxt6d0n0afpa05gj2891iqubhhcxczbtfu1k25icegaetnzgullgpn28b04ad76275lweu4kpc53dz6durrlym18mnr9d7jlk6yk0mo73oawwwta04qnpsi37rogkhzd053yr8wbyztfzt6ufy5hprmj47tirri6ukm9e6g71kmx275nz37z1bnb2qwilxoro8ir9uipe2kr86xp713tgi08xkqvhdrvf51vwjba8ipzyt00hvnf4hib1y4t83gb5sxo29uuh7tdycoui0w30ze0z5rjhmnhgcrrjmrohgr5c73vfu3g20thw0q8n9l863r628u83fciok5sbyt3n5nlr4pnbrd84ku9z67aa7ivn93vtqd17dfgoyapwomv2i2khnhkj395xfzw1kzaarsq8brmyfop3uctp8p827pxnuh9yetlwd07w1fu1i69vqd3h3nwxzn0p5ws7qiadnm4gpr77ipbwd6wiq50eazc4gzufufuos7e6ks2i504mb2wz5qs0urcuyb72wn9d9esqtn5256z5hf013elihqbswjqgs3iy71w5oclffytx98l4xxw9c3pfa2xwgb09dm2w64jnjt8500cbts9c5szcp6ty1olkph3ap1cbbq1e == 
\4\6\a\u\1\0\1\u\1\z\p\1\0\z\e\4\y\q\m\9\b\2\u\a\h\7\r\k\3\r\f\r\i\q\3\c\4\8\r\3\l\4\m\2\n\j\3\q\j\i\v\k\4\9\i\4\z\8\2\w\0\k\l\4\0\s\7\b\5\7\2\9\x\w\z\z\w\w\i\j\g\e\2\q\e\k\s\o\t\x\i\6\m\f\k\b\k\8\r\2\t\g\2\m\2\f\j\n\p\v\b\h\n\s\3\r\w\c\k\h\j\j\b\p\j\4\h\w\4\p\o\6\6\g\8\4\f\3\t\g\s\i\q\i\6\r\8\4\n\6\k\x\3\g\s\t\5\p\x\l\9\m\c\n\q\4\7\7\f\9\r\t\a\s\4\g\y\o\7\4\v\x\0\b\6\m\d\m\7\r\u\q\3\q\p\k\w\r\i\e\k\u\r\p\3\r\q\5\8\l\n\y\g\0\z\8\5\i\1\5\b\f\u\h\t\r\d\i\j\c\q\v\v\x\2\u\m\c\n\u\u\2\v\3\x\o\f\9\4\s\p\8\n\m\n\d\y\6\6\s\x\2\o\x\e\7\x\9\u\t\8\k\g\p\m\7\k\7\q\j\q\7\p\w\c\9\i\e\d\h\b\1\0\z\i\7\9\4\c\w\h\h\z\w\w\s\j\4\z\j\j\v\y\u\c\p\3\o\2\i\h\x\8\f\i\k\4\f\i\v\7\c\5\p\k\m\u\l\q\p\0\i\g\2\4\r\5\8\j\m\z\l\b\4\i\9\u\4\8\p\g\z\g\t\o\s\v\2\y\w\d\s\8\3\4\m\j\l\i\8\1\o\k\g\x\t\6\d\0\n\0\a\f\p\a\0\5\g\j\2\8\9\1\i\q\u\b\h\h\c\x\c\z\b\t\f\u\1\k\2\5\i\c\e\g\a\e\t\n\z\g\u\l\l\g\p\n\2\8\b\0\4\a\d\7\6\2\7\5\l\w\e\u\4\k\p\c\5\3\d\z\6\d\u\r\r\l\y\m\1\8\m\n\r\9\d\7\j\l\k\6\y\k\0\m\o\7\3\o\a\w\w\w\t\a\0\4\q\n\p\s\i\3\7\r\o\g\k\h\z\d\0\5\3\y\r\8\w\b\y\z\t\f\z\t\6\u\f\y\5\h\p\r\m\j\4\7\t\i\r\r\i\6\u\k\m\9\e\6\g\7\1\k\m\x\2\7\5\n\z\3\7\z\1\b\n\b\2\q\w\i\l\x\o\r\o\8\i\r\9\u\i\p\e\2\k\r\8\6\x\p\7\1\3\t\g\i\0\8\x\k\q\v\h\d\r\v\f\5\1\v\w\j\b\a\8\i\p\z\y\t\0\0\h\v\n\f\4\h\i\b\1\y\4\t\8\3\g\b\5\s\x\o\2\9\u\u\h\7\t\d\y\c\o\u\i\0\w\3\0\z\e\0\z\5\r\j\h\m\n\h\g\c\r\r\j\m\r\o\h\g\r\5\c\7\3\v\f\u\3\g\2\0\t\h\w\0\q\8\n\9\l\8\6\3\r\6\2\8\u\8\3\f\c\i\o\k\5\s\b\y\t\3\n\5\n\l\r\4\p\n\b\r\d\8\4\k\u\9\z\6\7\a\a\7\i\v\n\9\3\v\t\q\d\1\7\d\f\g\o\y\a\p\w\o\m\v\2\i\2\k\h\n\h\k\j\3\9\5\x\f\z\w\1\k\z\a\a\r\s\q\8\b\r\m\y\f\o\p\3\u\c\t\p\8\p\8\2\7\p\x\n\u\h\9\y\e\t\l\w\d\0\7\w\1\f\u\1\i\6\9\v\q\d\3\h\3\n\w\x\z\n\0\p\5\w\s\7\q\i\a\d\n\m\4\g\p\r\7\7\i\p\b\w\d\6\w\i\q\5\0\e\a\z\c\4\g\z\u\f\u\f\u\o\s\7\e\6\k\s\2\i\5\0\4\m\b\2\w\z\5\q\s\0\u\r\c\u\y\b\7\2\w\n\9\d\9\e\s\q\t\n\5\2\5\6\z\5\h\f\0\1\3\e\l\i\h\q\b\s\w\j\q\g\s\3\i\y\7\1\w\5\o\c\l\f\f\y\t\x\9\8\l\4\x\x\w\9\c\3\p\f\a\2\x\w\g\b\0\9\d\m\2\w\6\4\j\n\j\t\8\5\0\0\c\b\t\s\9\c\5\s\z\c\p\6\t\y\1\o\l\k\p\h\3\a\p\1\c\b\b\q\1\e ]] 00:11:21.041 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:21.300 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:21.300 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:21.300 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:21.300 05:21:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:21.583 [2024-11-20 05:21:35.813857] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
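Both integrity checks for the zram round trip sit just above: the 1024-character random marker generated by gen_bytes is read back from the dump files and string-compared against the original, and diff -q then confirms magic.dump0 and magic.dump1 are byte-identical before the data is copied back out of uring0 into malloc0. Reduced to its essentials, the verification looks roughly like this sketch (magic holds the generated string; the dump paths are the ones under test/dd):

  read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
  [[ "$verify_magic" == "$magic" ]] || { echo "marker mismatch" >&2; exit 1; }
  diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1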
00:11:21.583 [2024-11-20 05:21:35.813967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:11:21.583 { 00:11:21.583 "subsystems": [ 00:11:21.583 { 00:11:21.583 "subsystem": "bdev", 00:11:21.583 "config": [ 00:11:21.583 { 00:11:21.583 "params": { 00:11:21.583 "block_size": 512, 00:11:21.583 "num_blocks": 1048576, 00:11:21.583 "name": "malloc0" 00:11:21.583 }, 00:11:21.583 "method": "bdev_malloc_create" 00:11:21.583 }, 00:11:21.583 { 00:11:21.583 "params": { 00:11:21.583 "filename": "/dev/zram1", 00:11:21.583 "name": "uring0" 00:11:21.583 }, 00:11:21.583 "method": "bdev_uring_create" 00:11:21.583 }, 00:11:21.583 { 00:11:21.583 "method": "bdev_wait_for_examine" 00:11:21.583 } 00:11:21.583 ] 00:11:21.583 } 00:11:21.583 ] 00:11:21.583 } 00:11:21.583 [2024-11-20 05:21:35.959017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.583 [2024-11-20 05:21:35.993114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.583 [2024-11-20 05:21:36.023788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.988  [2024-11-20T05:21:38.436Z] Copying: 147/512 [MB] (147 MBps) [2024-11-20T05:21:39.371Z] Copying: 297/512 [MB] (149 MBps) [2024-11-20T05:21:39.938Z] Copying: 445/512 [MB] (148 MBps) [2024-11-20T05:21:39.938Z] Copying: 512/512 [MB] (average 146 MBps) 00:11:25.425 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:25.425 05:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:25.425 { 00:11:25.425 "subsystems": [ 00:11:25.425 { 00:11:25.425 "subsystem": "bdev", 00:11:25.425 "config": [ 00:11:25.425 { 00:11:25.425 "params": { 00:11:25.425 "block_size": 512, 00:11:25.425 "num_blocks": 1048576, 00:11:25.425 "name": "malloc0" 00:11:25.425 }, 00:11:25.425 "method": "bdev_malloc_create" 00:11:25.425 }, 00:11:25.425 { 00:11:25.425 "params": { 00:11:25.425 "filename": "/dev/zram1", 00:11:25.425 "name": "uring0" 00:11:25.425 }, 00:11:25.425 "method": "bdev_uring_create" 00:11:25.425 }, 00:11:25.425 { 00:11:25.425 "params": { 00:11:25.425 "name": "uring0" 00:11:25.425 }, 00:11:25.425 "method": "bdev_uring_delete" 00:11:25.425 }, 00:11:25.425 { 00:11:25.425 "method": "bdev_wait_for_examine" 00:11:25.425 } 00:11:25.425 ] 00:11:25.425 } 00:11:25.425 ] 00:11:25.425 } 00:11:25.683 [2024-11-20 05:21:39.942121] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:25.683 [2024-11-20 05:21:39.942284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61619 ] 00:11:25.683 [2024-11-20 05:21:40.097206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.683 [2024-11-20 05:21:40.130849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.683 [2024-11-20 05:21:40.162130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.941  [2024-11-20T05:21:40.712Z] Copying: 0/0 [B] (average 0 Bps) 00:11:26.199 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:26.199 05:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:26.199 [2024-11-20 05:21:40.573602] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
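The run that just started is the suite's negative check: its config ends with bdev_uring_delete for uring0, so by the time spdk_dd tries to open uring0 the bdev no longer exists and the process is expected to exit non-zero (the harness wraps it in NOT and, further down, folds the exit status 237 back to a generic failure of 1). A standalone sketch of the same expectation, assuming a hypothetical config file del.json that creates and then deletes uring0 exactly as the JSON above does, and /dev/null standing in for the harness's output descriptor:

  # expect failure: uring0 was deleted by the config itself
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null --json del.json; then
      echo "read from deleted uring0 unexpectedly succeeded" >&2
      exit 1
  fi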
00:11:26.199 [2024-11-20 05:21:40.573689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61648 ] 00:11:26.199 { 00:11:26.199 "subsystems": [ 00:11:26.199 { 00:11:26.199 "subsystem": "bdev", 00:11:26.199 "config": [ 00:11:26.199 { 00:11:26.199 "params": { 00:11:26.199 "block_size": 512, 00:11:26.199 "num_blocks": 1048576, 00:11:26.199 "name": "malloc0" 00:11:26.199 }, 00:11:26.199 "method": "bdev_malloc_create" 00:11:26.199 }, 00:11:26.199 { 00:11:26.199 "params": { 00:11:26.199 "filename": "/dev/zram1", 00:11:26.199 "name": "uring0" 00:11:26.199 }, 00:11:26.199 "method": "bdev_uring_create" 00:11:26.199 }, 00:11:26.199 { 00:11:26.199 "params": { 00:11:26.199 "name": "uring0" 00:11:26.199 }, 00:11:26.199 "method": "bdev_uring_delete" 00:11:26.199 }, 00:11:26.199 { 00:11:26.199 "method": "bdev_wait_for_examine" 00:11:26.199 } 00:11:26.199 ] 00:11:26.199 } 00:11:26.199 ] 00:11:26.199 } 00:11:26.457 [2024-11-20 05:21:40.718182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.457 [2024-11-20 05:21:40.750482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.457 [2024-11-20 05:21:40.779647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.457 [2024-11-20 05:21:40.904252] bdev.c:8480:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:26.457 [2024-11-20 05:21:40.904308] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:26.457 [2024-11-20 05:21:40.904319] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:26.457 [2024-11-20 05:21:40.904330] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:26.715 [2024-11-20 05:21:41.069397] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:26.715 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:26.973 00:11:26.973 real 0m14.058s 00:11:26.973 user 0m9.551s 00:11:26.973 sys 0m12.425s 00:11:26.973 05:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.973 05:21:41 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:26.973 ************************************ 00:11:26.973 END TEST dd_uring_copy 00:11:26.973 ************************************ 00:11:26.973 00:11:26.973 real 0m14.281s 00:11:26.973 user 0m9.679s 00:11:26.973 sys 0m12.518s 00:11:26.973 05:21:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.973 05:21:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:26.973 ************************************ 00:11:26.973 END TEST spdk_dd_uring 00:11:26.973 ************************************ 00:11:26.973 05:21:41 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:26.973 05:21:41 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:26.973 05:21:41 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.973 05:21:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 ************************************ 00:11:27.232 START TEST spdk_dd_sparse 00:11:27.232 ************************************ 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:27.232 * Looking for test storage... 00:11:27.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:27.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.232 --rc genhtml_branch_coverage=1 00:11:27.232 --rc genhtml_function_coverage=1 00:11:27.232 --rc genhtml_legend=1 00:11:27.232 --rc geninfo_all_blocks=1 00:11:27.232 --rc geninfo_unexecuted_blocks=1 00:11:27.232 00:11:27.232 ' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:27.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.232 --rc genhtml_branch_coverage=1 00:11:27.232 --rc genhtml_function_coverage=1 00:11:27.232 --rc genhtml_legend=1 00:11:27.232 --rc geninfo_all_blocks=1 00:11:27.232 --rc geninfo_unexecuted_blocks=1 00:11:27.232 00:11:27.232 ' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:27.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.232 --rc genhtml_branch_coverage=1 00:11:27.232 --rc genhtml_function_coverage=1 00:11:27.232 --rc genhtml_legend=1 00:11:27.232 --rc geninfo_all_blocks=1 00:11:27.232 --rc geninfo_unexecuted_blocks=1 00:11:27.232 00:11:27.232 ' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:27.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.232 --rc genhtml_branch_coverage=1 00:11:27.232 --rc genhtml_function_coverage=1 00:11:27.232 --rc genhtml_legend=1 00:11:27.232 --rc geninfo_all_blocks=1 00:11:27.232 --rc geninfo_unexecuted_blocks=1 00:11:27.232 00:11:27.232 ' 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.232 05:21:41 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:27.232 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:27.233 1+0 records in 00:11:27.233 1+0 records out 00:11:27.233 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00536785 s, 781 MB/s 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:27.233 1+0 records in 00:11:27.233 1+0 records out 00:11:27.233 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00527259 s, 795 MB/s 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:27.233 1+0 records in 00:11:27.233 1+0 records out 00:11:27.233 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00565517 s, 742 MB/s 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:27.233 ************************************ 00:11:27.233 START TEST dd_sparse_file_to_file 00:11:27.233 ************************************ 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:27.233 05:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:27.491 [2024-11-20 05:21:41.759241] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
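The prepare step traced above lays out the sparse input for this test: a 100 MiB backing file for the aio bdev and a 36 MiB data file whose only extents are three 4 MiB writes at offsets 0, 16 MiB and 32 MiB (bs=4M with seek=4 and seek=8). A minimal stand-alone sketch of the same layout, using the file names from dd/sparse.sh; the final stat line is only an illustrative way to confirm sparseness (apparent size far larger than allocated blocks):

truncate --size 104857600 dd_sparse_aio_disk            # 100 MiB backing file for the aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1             # 4 MiB extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4      # 4 MiB extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8      # 4 MiB extent at 32 MiB; apparent size now 36 MiB
stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1   # 37748736 bytes, 24576 blocks (12 MiB of data)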
00:11:27.491 [2024-11-20 05:21:41.759358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61742 ] 00:11:27.491 { 00:11:27.491 "subsystems": [ 00:11:27.491 { 00:11:27.491 "subsystem": "bdev", 00:11:27.491 "config": [ 00:11:27.491 { 00:11:27.491 "params": { 00:11:27.491 "block_size": 4096, 00:11:27.491 "filename": "dd_sparse_aio_disk", 00:11:27.491 "name": "dd_aio" 00:11:27.491 }, 00:11:27.491 "method": "bdev_aio_create" 00:11:27.491 }, 00:11:27.491 { 00:11:27.491 "params": { 00:11:27.491 "lvs_name": "dd_lvstore", 00:11:27.491 "bdev_name": "dd_aio" 00:11:27.491 }, 00:11:27.491 "method": "bdev_lvol_create_lvstore" 00:11:27.491 }, 00:11:27.491 { 00:11:27.491 "method": "bdev_wait_for_examine" 00:11:27.491 } 00:11:27.491 ] 00:11:27.491 } 00:11:27.491 ] 00:11:27.491 } 00:11:27.491 [2024-11-20 05:21:41.911615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.491 [2024-11-20 05:21:41.949645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.491 [2024-11-20 05:21:41.982159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.749  [2024-11-20T05:21:42.262Z] Copying: 12/36 [MB] (average 1000 MBps) 00:11:27.749 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:27.749 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:28.007 00:11:28.007 real 0m0.554s 00:11:28.007 user 0m0.364s 00:11:28.007 sys 0m0.238s 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:28.007 ************************************ 00:11:28.007 END TEST dd_sparse_file_to_file 00:11:28.007 ************************************ 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:28.007 ************************************ 00:11:28.007 START TEST dd_sparse_file_to_bdev 
00:11:28.007 ************************************ 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:28.007 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:28.007 [2024-11-20 05:21:42.364034] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:28.007 [2024-11-20 05:21:42.364122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61790 ] 00:11:28.007 { 00:11:28.007 "subsystems": [ 00:11:28.007 { 00:11:28.007 "subsystem": "bdev", 00:11:28.007 "config": [ 00:11:28.007 { 00:11:28.007 "params": { 00:11:28.007 "block_size": 4096, 00:11:28.007 "filename": "dd_sparse_aio_disk", 00:11:28.007 "name": "dd_aio" 00:11:28.007 }, 00:11:28.007 "method": "bdev_aio_create" 00:11:28.007 }, 00:11:28.007 { 00:11:28.007 "params": { 00:11:28.007 "lvs_name": "dd_lvstore", 00:11:28.007 "lvol_name": "dd_lvol", 00:11:28.007 "size_in_mib": 36, 00:11:28.007 "thin_provision": true 00:11:28.007 }, 00:11:28.007 "method": "bdev_lvol_create" 00:11:28.007 }, 00:11:28.007 { 00:11:28.007 "method": "bdev_wait_for_examine" 00:11:28.007 } 00:11:28.007 ] 00:11:28.007 } 00:11:28.007 ] 00:11:28.007 } 00:11:28.007 [2024-11-20 05:21:42.515891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.266 [2024-11-20 05:21:42.555396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.266 [2024-11-20 05:21:42.587702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.266  [2024-11-20T05:21:43.037Z] Copying: 12/36 [MB] (average 521 MBps) 00:11:28.524 00:11:28.524 00:11:28.524 real 0m0.518s 00:11:28.524 user 0m0.339s 00:11:28.524 sys 0m0.244s 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:28.524 ************************************ 00:11:28.524 END TEST dd_sparse_file_to_bdev 00:11:28.524 ************************************ 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:28.524 ************************************ 00:11:28.524 START TEST dd_sparse_bdev_to_file 00:11:28.524 ************************************ 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:28.524 05:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:28.524 [2024-11-20 05:21:42.927463] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:28.524 [2024-11-20 05:21:42.927552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61817 ] 00:11:28.524 { 00:11:28.524 "subsystems": [ 00:11:28.524 { 00:11:28.524 "subsystem": "bdev", 00:11:28.524 "config": [ 00:11:28.524 { 00:11:28.524 "params": { 00:11:28.524 "block_size": 4096, 00:11:28.524 "filename": "dd_sparse_aio_disk", 00:11:28.524 "name": "dd_aio" 00:11:28.524 }, 00:11:28.524 "method": "bdev_aio_create" 00:11:28.524 }, 00:11:28.524 { 00:11:28.524 "method": "bdev_wait_for_examine" 00:11:28.524 } 00:11:28.524 ] 00:11:28.524 } 00:11:28.524 ] 00:11:28.524 } 00:11:28.782 [2024-11-20 05:21:43.075846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.782 [2024-11-20 05:21:43.114012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.782 [2024-11-20 05:21:43.146268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.782  [2024-11-20T05:21:43.553Z] Copying: 12/36 [MB] (average 1200 MBps) 00:11:29.040 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:29.040 00:11:29.040 real 0m0.523s 00:11:29.040 user 0m0.300s 00:11:29.040 sys 0m0.271s 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.040 ************************************ 00:11:29.040 END TEST dd_sparse_bdev_to_file 00:11:29.040 ************************************ 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:29.040 00:11:29.040 real 0m1.975s 00:11:29.040 user 0m1.188s 00:11:29.040 sys 0m0.949s 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.040 05:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:29.040 ************************************ 00:11:29.040 END TEST spdk_dd_sparse 00:11:29.040 ************************************ 00:11:29.041 05:21:43 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:29.041 05:21:43 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.041 05:21:43 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.041 05:21:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:29.041 ************************************ 00:11:29.041 START TEST spdk_dd_negative 00:11:29.041 ************************************ 00:11:29.041 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:29.299 * Looking for test storage... 
00:11:29.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.299 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.300 --rc genhtml_branch_coverage=1 00:11:29.300 --rc genhtml_function_coverage=1 00:11:29.300 --rc genhtml_legend=1 00:11:29.300 --rc geninfo_all_blocks=1 00:11:29.300 --rc geninfo_unexecuted_blocks=1 00:11:29.300 00:11:29.300 ' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.300 --rc genhtml_branch_coverage=1 00:11:29.300 --rc genhtml_function_coverage=1 00:11:29.300 --rc genhtml_legend=1 00:11:29.300 --rc geninfo_all_blocks=1 00:11:29.300 --rc geninfo_unexecuted_blocks=1 00:11:29.300 00:11:29.300 ' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.300 --rc genhtml_branch_coverage=1 00:11:29.300 --rc genhtml_function_coverage=1 00:11:29.300 --rc genhtml_legend=1 00:11:29.300 --rc geninfo_all_blocks=1 00:11:29.300 --rc geninfo_unexecuted_blocks=1 00:11:29.300 00:11:29.300 ' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.300 --rc genhtml_branch_coverage=1 00:11:29.300 --rc genhtml_function_coverage=1 00:11:29.300 --rc genhtml_legend=1 00:11:29.300 --rc geninfo_all_blocks=1 00:11:29.300 --rc geninfo_unexecuted_blocks=1 00:11:29.300 00:11:29.300 ' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.300 ************************************ 00:11:29.300 START TEST 
dd_invalid_arguments 00:11:29.300 ************************************ 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.300 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.300 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:29.300 00:11:29.300 CPU options: 00:11:29.300 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:29.300 (like [0,1,10]) 00:11:29.300 --lcores lcore to CPU mapping list. The list is in the format: 00:11:29.300 [<,lcores[@CPUs]>...] 00:11:29.300 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:29.300 Within the group, '-' is used for range separator, 00:11:29.300 ',' is used for single number separator. 00:11:29.300 '( )' can be omitted for single element group, 00:11:29.300 '@' can be omitted if cpus and lcores have the same value 00:11:29.300 --disable-cpumask-locks Disable CPU core lock files. 00:11:29.300 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:29.300 pollers in the app support interrupt mode) 00:11:29.300 -p, --main-core main (primary) core for DPDK 00:11:29.300 00:11:29.300 Configuration options: 00:11:29.300 -c, --config, --json JSON config file 00:11:29.300 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:29.300 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:29.300 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:29.300 --rpcs-allowed comma-separated list of permitted RPCS 00:11:29.300 --json-ignore-init-errors don't exit on invalid config entry 00:11:29.300 00:11:29.300 Memory options: 00:11:29.300 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:29.300 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:29.300 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:29.300 -R, --huge-unlink unlink huge files after initialization 00:11:29.300 -n, --mem-channels number of memory channels used for DPDK 00:11:29.300 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:29.300 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:29.300 --no-huge run without using hugepages 00:11:29.300 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:29.300 -i, --shm-id shared memory ID (optional) 00:11:29.300 -g, --single-file-segments force creating just one hugetlbfs file 00:11:29.300 00:11:29.300 PCI options: 00:11:29.300 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:29.300 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:29.300 -u, --no-pci disable PCI access 00:11:29.300 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:29.300 00:11:29.301 Log options: 00:11:29.301 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:29.301 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:29.301 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:29.301 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:29.301 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:11:29.301 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:11:29.301 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:11:29.301 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:11:29.301 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:11:29.301 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:11:29.301 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:29.301 --silence-noticelog disable notice level logging to stderr 00:11:29.301 00:11:29.301 Trace options: 00:11:29.301 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:29.301 setting 0 to disable trace (default 32768) 00:11:29.301 Tracepoints vary in size and can use more than one trace entry. 00:11:29.301 -e, --tpoint-group [:] 00:11:29.301 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:29.301 [2024-11-20 05:21:43.764340] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:29.301 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:29.301 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:11:29.301 bdev_raid, scheduler, all). 00:11:29.301 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:29.301 a tracepoint group. First tpoint inside a group can be enabled by 00:11:29.301 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:29.301 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:29.301 in /include/spdk_internal/trace_defs.h 00:11:29.301 00:11:29.301 Other options: 00:11:29.301 -h, --help show this usage 00:11:29.301 -v, --version print SPDK version 00:11:29.301 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:29.301 --env-context Opaque context for use of the env implementation 00:11:29.301 00:11:29.301 Application specific: 00:11:29.301 [--------- DD Options ---------] 00:11:29.301 --if Input file. Must specify either --if or --ib. 00:11:29.301 --ib Input bdev. Must specifier either --if or --ib 00:11:29.301 --of Output file. Must specify either --of or --ob. 00:11:29.301 --ob Output bdev. Must specify either --of or --ob. 00:11:29.301 --iflag Input file flags. 00:11:29.301 --oflag Output file flags. 00:11:29.301 --bs I/O unit size (default: 4096) 00:11:29.301 --qd Queue depth (default: 2) 00:11:29.301 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:29.301 --skip Skip this many I/O units at start of input. (default: 0) 00:11:29.301 --seek Skip this many I/O units at start of output. (default: 0) 00:11:29.301 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:29.301 --sparse Enable hole skipping in input target 00:11:29.301 Available iflag and oflag values: 00:11:29.301 append - append mode 00:11:29.301 direct - use direct I/O for data 00:11:29.301 directory - fail unless a directory 00:11:29.301 dsync - use synchronized I/O for data 00:11:29.301 noatime - do not update access time 00:11:29.301 noctty - do not assign controlling terminal from file 00:11:29.301 nofollow - do not follow symlinks 00:11:29.301 nonblock - use non-blocking I/O 00:11:29.301 sync - use synchronized I/O for data and metadata 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.301 00:11:29.301 real 0m0.081s 00:11:29.301 user 0m0.056s 00:11:29.301 sys 0m0.024s 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.301 05:21:43 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:29.301 ************************************ 00:11:29.301 END TEST dd_invalid_arguments 00:11:29.301 ************************************ 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 ************************************ 00:11:29.560 START TEST dd_double_input 00:11:29.560 ************************************ 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.560 [2024-11-20 05:21:43.888629] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
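The double-input failure above is the shape every case in this negative suite takes: drive spdk_dd with a missing or contradictory option through the NOT wrapper and require a non-zero exit alongside the matching *ERROR* line. A reduced sketch of that pattern under the binary path used in this run; the expect_fail helper is illustrative, standing in for common/autotest_common.sh's NOT:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
touch dd.dump0 dd.dump1                  # stand-ins for the test_file0/test_file1 dumps

# expect_fail: succeed only if the wrapped command fails, mirroring NOT.
expect_fail() {
    if "$@"; then
        echo "expected failure, but command succeeded: $*" >&2
        return 1
    fi
    return 0
}

expect_fail "$SPDK_DD" --ii= --ob=                              # unrecognized option
expect_fail "$SPDK_DD" --if=dd.dump0 --ib= --ob=                # --if and --ib are mutually exclusive
expect_fail "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --ob=        # --of and --ob are mutually exclusive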
00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.560 00:11:29.560 real 0m0.079s 00:11:29.560 user 0m0.050s 00:11:29.560 sys 0m0.027s 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 ************************************ 00:11:29.560 END TEST dd_double_input 00:11:29.560 ************************************ 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 ************************************ 00:11:29.560 START TEST dd_double_output 00:11:29.560 ************************************ 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:11:29.560 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.561 05:21:43 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.561 [2024-11-20 05:21:44.011391] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.561 00:11:29.561 real 0m0.073s 00:11:29.561 user 0m0.045s 00:11:29.561 sys 0m0.027s 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.561 ************************************ 00:11:29.561 END TEST dd_double_output 00:11:29.561 ************************************ 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.561 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.819 ************************************ 00:11:29.819 START TEST dd_no_input 00:11:29.819 ************************************ 00:11:29.819 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:11:29.819 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.820 [2024-11-20 05:21:44.133125] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.820 00:11:29.820 real 0m0.074s 00:11:29.820 user 0m0.051s 00:11:29.820 sys 0m0.022s 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:29.820 ************************************ 00:11:29.820 END TEST dd_no_input 00:11:29.820 ************************************ 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.820 ************************************ 00:11:29.820 START TEST dd_no_output 00:11:29.820 ************************************ 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.820 [2024-11-20 05:21:44.260281] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:29.820 05:21:44 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.820 00:11:29.820 real 0m0.077s 00:11:29.820 user 0m0.050s 00:11:29.820 sys 0m0.026s 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:29.820 ************************************ 00:11:29.820 END TEST dd_no_output 00:11:29.820 ************************************ 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.820 ************************************ 00:11:29.820 START TEST dd_wrong_blocksize 00:11:29.820 ************************************ 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.820 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:30.079 [2024-11-20 05:21:44.381814] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:30.079 00:11:30.079 real 0m0.074s 00:11:30.079 user 0m0.042s 00:11:30.079 sys 0m0.030s 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:30.079 ************************************ 00:11:30.079 END TEST dd_wrong_blocksize 00:11:30.079 ************************************ 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:30.079 ************************************ 00:11:30.079 START TEST dd_smaller_blocksize 00:11:30.079 ************************************ 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.079 
05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:30.079 05:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:30.079 [2024-11-20 05:21:44.501465] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:30.079 [2024-11-20 05:21:44.501569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62049 ] 00:11:30.339 [2024-11-20 05:21:44.651734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.339 [2024-11-20 05:21:44.690625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.339 [2024-11-20 05:21:44.723198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.598 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:30.856 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:30.856 [2024-11-20 05:21:45.265199] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:30.856 [2024-11-20 05:21:45.265263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.856 [2024-11-20 05:21:45.332863] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.115 00:11:31.115 real 0m0.941s 00:11:31.115 user 0m0.334s 00:11:31.115 sys 0m0.500s 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:31.115 ************************************ 00:11:31.115 END TEST dd_smaller_blocksize 00:11:31.115 ************************************ 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.115 ************************************ 00:11:31.115 START TEST dd_invalid_count 00:11:31.115 ************************************ 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.115 [2024-11-20 05:21:45.492868] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.115 00:11:31.115 real 0m0.075s 00:11:31.115 user 0m0.047s 00:11:31.115 sys 0m0.027s 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:11:31.115 ************************************ 00:11:31.115 END TEST dd_invalid_count 00:11:31.115 ************************************ 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.115 ************************************ 
00:11:31.115 START TEST dd_invalid_oflag 00:11:31.115 ************************************ 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.115 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.115 [2024-11-20 05:21:45.613647] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.374 00:11:31.374 real 0m0.076s 00:11:31.374 user 0m0.051s 00:11:31.374 sys 0m0.023s 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:31.374 ************************************ 00:11:31.374 END TEST dd_invalid_oflag 00:11:31.374 ************************************ 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.374 ************************************ 00:11:31.374 START TEST dd_invalid_iflag 00:11:31.374 
************************************ 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.374 [2024-11-20 05:21:45.744650] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.374 00:11:31.374 real 0m0.080s 00:11:31.374 user 0m0.044s 00:11:31.374 sys 0m0.035s 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.374 ************************************ 00:11:31.374 END TEST dd_invalid_iflag 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:31.374 ************************************ 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.374 ************************************ 00:11:31.374 START TEST dd_unknown_flag 00:11:31.374 ************************************ 00:11:31.374 
05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.374 05:21:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.374 [2024-11-20 05:21:45.871804] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:31.374 [2024-11-20 05:21:45.871921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62141 ] 00:11:31.633 [2024-11-20 05:21:46.022865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.633 [2024-11-20 05:21:46.062399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.633 [2024-11-20 05:21:46.095976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.633 [2024-11-20 05:21:46.119338] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:31.633 [2024-11-20 05:21:46.119431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.633 [2024-11-20 05:21:46.119496] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:31.633 [2024-11-20 05:21:46.119512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.633 [2024-11-20 05:21:46.119803] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:31.633 [2024-11-20 05:21:46.119823] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.633 [2024-11-20 05:21:46.119892] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:31.633 [2024-11-20 05:21:46.119920] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:31.892 [2024-11-20 05:21:46.198133] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.892 00:11:31.892 real 0m0.450s 00:11:31.892 user 0m0.249s 00:11:31.892 sys 0m0.106s 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 ************************************ 00:11:31.892 END TEST dd_unknown_flag 00:11:31.892 ************************************ 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 ************************************ 00:11:31.892 START TEST dd_invalid_json 00:11:31.892 ************************************ 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.892 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:31.892 [2024-11-20 05:21:46.373398] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:31.892 [2024-11-20 05:21:46.373953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62170 ] 00:11:32.151 [2024-11-20 05:21:46.519287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.151 [2024-11-20 05:21:46.552839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.151 [2024-11-20 05:21:46.552970] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:32.151 [2024-11-20 05:21:46.552990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:32.151 [2024-11-20 05:21:46.553000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:32.151 [2024-11-20 05:21:46.553039] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.151 00:11:32.151 real 0m0.303s 00:11:32.151 user 0m0.143s 00:11:32.151 sys 0m0.055s 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:32.151 ************************************ 00:11:32.151 END TEST dd_invalid_json 00:11:32.151 ************************************ 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.151 05:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:32.410 ************************************ 00:11:32.410 START TEST dd_invalid_seek 00:11:32.410 ************************************ 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:32.410 
05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:32.410 05:21:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.410 { 00:11:32.410 "subsystems": [ 00:11:32.410 { 00:11:32.410 "subsystem": "bdev", 00:11:32.410 "config": [ 00:11:32.410 { 00:11:32.410 "params": { 00:11:32.410 "block_size": 512, 00:11:32.410 "num_blocks": 512, 00:11:32.410 "name": "malloc0" 00:11:32.410 }, 00:11:32.410 "method": "bdev_malloc_create" 00:11:32.410 }, 00:11:32.410 { 00:11:32.410 "params": { 00:11:32.410 "block_size": 512, 00:11:32.410 "num_blocks": 512, 00:11:32.410 "name": "malloc1" 00:11:32.410 }, 00:11:32.410 "method": "bdev_malloc_create" 00:11:32.410 }, 00:11:32.410 { 00:11:32.410 "method": "bdev_wait_for_examine" 00:11:32.410 } 00:11:32.410 ] 00:11:32.410 } 00:11:32.410 ] 00:11:32.410 } 00:11:32.410 [2024-11-20 05:21:46.753876] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:32.410 [2024-11-20 05:21:46.754014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62199 ] 00:11:32.410 [2024-11-20 05:21:46.910423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.669 [2024-11-20 05:21:46.949391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.669 [2024-11-20 05:21:46.981937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.669 [2024-11-20 05:21:47.030440] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:11:32.669 [2024-11-20 05:21:47.030522] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:32.669 [2024-11-20 05:21:47.106814] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.669 00:11:32.669 real 0m0.510s 00:11:32.669 user 0m0.344s 00:11:32.669 sys 0m0.119s 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.669 ************************************ 00:11:32.669 END TEST dd_invalid_seek 00:11:32.669 ************************************ 00:11:32.669 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 ************************************ 00:11:32.955 START TEST dd_invalid_skip 00:11:32.955 ************************************ 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:32.955 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:32.955 [2024-11-20 05:21:47.276400] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:32.955 [2024-11-20 05:21:47.276482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62227 ] 00:11:32.955 { 00:11:32.955 "subsystems": [ 00:11:32.955 { 00:11:32.955 "subsystem": "bdev", 00:11:32.955 "config": [ 00:11:32.955 { 00:11:32.955 "params": { 00:11:32.955 "block_size": 512, 00:11:32.955 "num_blocks": 512, 00:11:32.955 "name": "malloc0" 00:11:32.955 }, 00:11:32.955 "method": "bdev_malloc_create" 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "params": { 00:11:32.955 "block_size": 512, 00:11:32.955 "num_blocks": 512, 00:11:32.955 "name": "malloc1" 00:11:32.955 }, 00:11:32.955 "method": "bdev_malloc_create" 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "method": "bdev_wait_for_examine" 00:11:32.955 } 00:11:32.955 ] 00:11:32.955 } 00:11:32.955 ] 00:11:32.955 } 00:11:32.955 [2024-11-20 05:21:47.421673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.229 [2024-11-20 05:21:47.457340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.229 [2024-11-20 05:21:47.489178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.229 [2024-11-20 05:21:47.536075] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:11:33.229 [2024-11-20 05:21:47.536144] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.229 [2024-11-20 05:21:47.606238] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.229 00:11:33.229 real 0m0.441s 00:11:33.229 user 0m0.296s 00:11:33.229 sys 0m0.106s 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:33.229 ************************************ 00:11:33.229 END TEST dd_invalid_skip 00:11:33.229 ************************************ 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:33.229 ************************************ 00:11:33.229 START TEST dd_invalid_input_count 00:11:33.229 ************************************ 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:11:33.229 05:21:47 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:33.229 05:21:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.489 [2024-11-20 05:21:47.762102] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:33.489 [2024-11-20 05:21:47.762188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62266 ] 00:11:33.489 { 00:11:33.489 "subsystems": [ 00:11:33.489 { 00:11:33.489 "subsystem": "bdev", 00:11:33.489 "config": [ 00:11:33.489 { 00:11:33.489 "params": { 00:11:33.489 "block_size": 512, 00:11:33.489 "num_blocks": 512, 00:11:33.489 "name": "malloc0" 00:11:33.489 }, 00:11:33.489 "method": "bdev_malloc_create" 00:11:33.489 }, 00:11:33.489 { 00:11:33.489 "params": { 00:11:33.489 "block_size": 512, 00:11:33.489 "num_blocks": 512, 00:11:33.489 "name": "malloc1" 00:11:33.489 }, 00:11:33.489 "method": "bdev_malloc_create" 00:11:33.489 }, 00:11:33.489 { 00:11:33.489 "method": "bdev_wait_for_examine" 00:11:33.489 } 00:11:33.489 ] 00:11:33.489 } 00:11:33.489 ] 00:11:33.489 } 00:11:33.489 [2024-11-20 05:21:47.908686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.489 [2024-11-20 05:21:47.948450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.489 [2024-11-20 05:21:47.984694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.748 [2024-11-20 05:21:48.034468] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:11:33.748 [2024-11-20 05:21:48.034551] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.748 [2024-11-20 05:21:48.108493] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.748 00:11:33.748 real 0m0.459s 00:11:33.748 user 0m0.297s 00:11:33.748 sys 0m0.119s 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:33.748 ************************************ 00:11:33.748 END TEST dd_invalid_input_count 00:11:33.748 ************************************ 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:33.748 ************************************ 00:11:33.748 START TEST dd_invalid_output_count 00:11:33.748 ************************************ 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # 
invalid_output_count 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:11:33.748 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:33.749 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:34.007 { 00:11:34.007 "subsystems": [ 00:11:34.007 { 00:11:34.007 "subsystem": "bdev", 00:11:34.007 "config": [ 00:11:34.007 { 00:11:34.007 "params": { 00:11:34.007 "block_size": 512, 00:11:34.007 "num_blocks": 512, 00:11:34.007 "name": "malloc0" 00:11:34.007 }, 00:11:34.007 "method": "bdev_malloc_create" 00:11:34.007 }, 00:11:34.007 { 00:11:34.007 "method": "bdev_wait_for_examine" 00:11:34.007 } 00:11:34.007 ] 00:11:34.007 } 00:11:34.007 ] 00:11:34.007 } 00:11:34.007 [2024-11-20 05:21:48.272454] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 
initialization... 00:11:34.007 [2024-11-20 05:21:48.272526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:11:34.007 [2024-11-20 05:21:48.416527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.007 [2024-11-20 05:21:48.449212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.007 [2024-11-20 05:21:48.478650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.007 [2024-11-20 05:21:48.516839] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:11:34.007 [2024-11-20 05:21:48.516915] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:34.266 [2024-11-20 05:21:48.589281] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:34.266 00:11:34.266 real 0m0.432s 00:11:34.266 user 0m0.290s 00:11:34.266 sys 0m0.094s 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.266 ************************************ 00:11:34.266 END TEST dd_invalid_output_count 00:11:34.266 ************************************ 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:34.266 ************************************ 00:11:34.266 START TEST dd_bs_not_multiple 00:11:34.266 ************************************ 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:34.266 05:21:48 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:34.266 05:21:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.266 { 00:11:34.266 "subsystems": [ 00:11:34.266 { 00:11:34.266 "subsystem": "bdev", 00:11:34.266 "config": [ 00:11:34.266 { 00:11:34.266 "params": { 00:11:34.266 "block_size": 512, 00:11:34.266 "num_blocks": 512, 00:11:34.266 "name": "malloc0" 00:11:34.266 }, 00:11:34.266 "method": "bdev_malloc_create" 00:11:34.266 }, 00:11:34.266 { 00:11:34.266 "params": { 00:11:34.266 "block_size": 512, 00:11:34.266 "num_blocks": 512, 00:11:34.266 "name": "malloc1" 00:11:34.266 }, 00:11:34.266 "method": "bdev_malloc_create" 00:11:34.266 }, 00:11:34.266 { 00:11:34.266 "method": "bdev_wait_for_examine" 00:11:34.266 } 00:11:34.266 ] 00:11:34.266 } 00:11:34.266 ] 00:11:34.266 } 00:11:34.266 [2024-11-20 05:21:48.758648] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:34.266 [2024-11-20 05:21:48.758776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62331 ] 00:11:34.525 [2024-11-20 05:21:48.908069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.525 [2024-11-20 05:21:48.941026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.525 [2024-11-20 05:21:48.970314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.525 [2024-11-20 05:21:49.016829] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:11:34.525 [2024-11-20 05:21:49.016892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:34.783 [2024-11-20 05:21:49.084794] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:34.783 00:11:34.783 real 0m0.446s 00:11:34.783 user 0m0.293s 00:11:34.783 sys 0m0.108s 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:34.783 ************************************ 00:11:34.783 END TEST dd_bs_not_multiple 00:11:34.783 ************************************ 00:11:34.783 00:11:34.783 real 0m5.676s 00:11:34.783 user 0m3.059s 00:11:34.783 sys 0m2.031s 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.783 05:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:34.783 ************************************ 00:11:34.783 END TEST spdk_dd_negative 00:11:34.783 ************************************ 00:11:34.783 00:11:34.783 real 1m9.119s 00:11:34.783 user 0m44.878s 00:11:34.783 sys 0m28.768s 00:11:34.783 05:21:49 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.783 05:21:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:34.783 ************************************ 00:11:34.783 END TEST spdk_dd 00:11:34.783 ************************************ 00:11:34.783 05:21:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@256 -- # timing_exit lib 00:11:34.783 05:21:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.783 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:11:34.783 05:21:49 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:11:34.783 05:21:49 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:11:34.783 05:21:49 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:34.783 05:21:49 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:35.041 05:21:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.041 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.041 ************************************ 00:11:35.041 START TEST nvmf_tcp 00:11:35.041 ************************************ 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:35.041 * Looking for test storage... 00:11:35.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.041 05:21:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.041 --rc genhtml_branch_coverage=1 00:11:35.041 --rc genhtml_function_coverage=1 00:11:35.041 --rc genhtml_legend=1 00:11:35.041 --rc geninfo_all_blocks=1 00:11:35.041 --rc geninfo_unexecuted_blocks=1 00:11:35.041 00:11:35.041 ' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.041 --rc genhtml_branch_coverage=1 00:11:35.041 --rc genhtml_function_coverage=1 00:11:35.041 --rc genhtml_legend=1 00:11:35.041 --rc geninfo_all_blocks=1 00:11:35.041 --rc geninfo_unexecuted_blocks=1 00:11:35.041 00:11:35.041 ' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.041 --rc genhtml_branch_coverage=1 00:11:35.041 --rc genhtml_function_coverage=1 00:11:35.041 --rc genhtml_legend=1 00:11:35.041 --rc geninfo_all_blocks=1 00:11:35.041 --rc geninfo_unexecuted_blocks=1 00:11:35.041 00:11:35.041 ' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.041 --rc genhtml_branch_coverage=1 00:11:35.041 --rc genhtml_function_coverage=1 00:11:35.041 --rc genhtml_legend=1 00:11:35.041 --rc geninfo_all_blocks=1 00:11:35.041 --rc geninfo_unexecuted_blocks=1 00:11:35.041 00:11:35.041 ' 00:11:35.041 05:21:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:35.041 05:21:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:35.041 05:21:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.041 05:21:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.041 ************************************ 00:11:35.041 START TEST nvmf_target_core 00:11:35.041 ************************************ 00:11:35.041 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:35.301 * Looking for test storage... 00:11:35.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:35.301 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.301 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.301 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.301 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.302 --rc genhtml_branch_coverage=1 00:11:35.302 --rc genhtml_function_coverage=1 00:11:35.302 --rc genhtml_legend=1 00:11:35.302 --rc geninfo_all_blocks=1 00:11:35.302 --rc geninfo_unexecuted_blocks=1 00:11:35.302 00:11:35.302 ' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.302 --rc genhtml_branch_coverage=1 00:11:35.302 --rc genhtml_function_coverage=1 00:11:35.302 --rc genhtml_legend=1 00:11:35.302 --rc geninfo_all_blocks=1 00:11:35.302 --rc geninfo_unexecuted_blocks=1 00:11:35.302 00:11:35.302 ' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.302 --rc genhtml_branch_coverage=1 00:11:35.302 --rc genhtml_function_coverage=1 00:11:35.302 --rc genhtml_legend=1 00:11:35.302 --rc geninfo_all_blocks=1 00:11:35.302 --rc geninfo_unexecuted_blocks=1 00:11:35.302 00:11:35.302 ' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.302 --rc genhtml_branch_coverage=1 00:11:35.302 --rc genhtml_function_coverage=1 00:11:35.302 --rc genhtml_legend=1 00:11:35.302 --rc geninfo_all_blocks=1 00:11:35.302 --rc geninfo_unexecuted_blocks=1 00:11:35.302 00:11:35.302 ' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.302 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.303 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.303 ************************************ 00:11:35.303 START TEST nvmf_host_management 00:11:35.303 ************************************ 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:35.303 * Looking for test storage... 
00:11:35.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.303 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.564 --rc genhtml_branch_coverage=1 00:11:35.564 --rc genhtml_function_coverage=1 00:11:35.564 --rc genhtml_legend=1 00:11:35.564 --rc geninfo_all_blocks=1 00:11:35.564 --rc geninfo_unexecuted_blocks=1 00:11:35.564 00:11:35.564 ' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.564 --rc genhtml_branch_coverage=1 00:11:35.564 --rc genhtml_function_coverage=1 00:11:35.564 --rc genhtml_legend=1 00:11:35.564 --rc geninfo_all_blocks=1 00:11:35.564 --rc geninfo_unexecuted_blocks=1 00:11:35.564 00:11:35.564 ' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.564 --rc genhtml_branch_coverage=1 00:11:35.564 --rc genhtml_function_coverage=1 00:11:35.564 --rc genhtml_legend=1 00:11:35.564 --rc geninfo_all_blocks=1 00:11:35.564 --rc geninfo_unexecuted_blocks=1 00:11:35.564 00:11:35.564 ' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.564 --rc genhtml_branch_coverage=1 00:11:35.564 --rc genhtml_function_coverage=1 00:11:35.564 --rc genhtml_legend=1 00:11:35.564 --rc geninfo_all_blocks=1 00:11:35.564 --rc geninfo_unexecuted_blocks=1 00:11:35.564 00:11:35.564 ' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
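The lcov gate that keeps repeating above (`lt 1.15 2` driving `cmp_versions` in scripts/common.sh) only decides which spelling of the coverage `--rc` options gets exported in LCOV_OPTS/LCOV. A minimal stand-alone sketch of that kind of dotted-version comparison is shown below for illustration; it is not taken from scripts/common.sh and assumes purely numeric fields:

    # Compare two dot-separated numeric versions field by field (illustrative sketch only).
    version_lt() {
        local -a a b
        IFS=. read -r -a a <<< "$1"
        IFS=. read -r -a b <<< "$2"
        local i x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0   # first differing field decides
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov older than 2.x: keep the legacy --rc lcov_*_coverage flags'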
00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.564 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.565 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.565 05:21:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:35.565 Cannot find device "nvmf_init_br" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:35.565 Cannot find device "nvmf_init_br2" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:35.565 Cannot find device "nvmf_tgt_br" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.565 Cannot find device "nvmf_tgt_br2" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:35.565 Cannot find device "nvmf_init_br" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:35.565 Cannot find device "nvmf_init_br2" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:35.565 Cannot find device "nvmf_tgt_br" 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:11:35.565 05:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:35.565 Cannot find device "nvmf_tgt_br2" 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:35.565 Cannot find device "nvmf_br" 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:35.565 Cannot find device "nvmf_init_if" 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:35.565 Cannot find device "nvmf_init_if2" 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.565 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.823 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.824 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:36.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:36.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:36.082 00:11:36.082 --- 10.0.0.3 ping statistics --- 00:11:36.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.082 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:36.082 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:36.082 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:11:36.082 00:11:36.082 --- 10.0.0.4 ping statistics --- 00:11:36.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.082 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:36.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:36.082 00:11:36.082 --- 10.0.0.1 ping statistics --- 00:11:36.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.082 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:36.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:36.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:36.082 00:11:36.082 --- 10.0.0.2 ping statistics --- 00:11:36.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.082 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.082 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62668 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62668 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62668 ']' 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.083 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.083 [2024-11-20 05:21:50.543238] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
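The nvmf_veth_init block above builds the disposable test network: the initiator-side veth interfaces stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24), the target-side interfaces move into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24), everything is joined through the nvmf_br bridge, iptables accepts TCP port 4420 plus bridge forwarding, and the four pings confirm reachability in both directions. Condensed to a single initiator/target pair, with device and namespace names reused from the log, the same topology looks like this:

    # One veth pair per side, joined by a bridge; the target end lives in its own netns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target address, through nvmf_br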
00:11:36.083 [2024-11-20 05:21:50.543352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.341 [2024-11-20 05:21:50.697523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.341 [2024-11-20 05:21:50.739986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.341 [2024-11-20 05:21:50.740047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.341 [2024-11-20 05:21:50.740062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.341 [2024-11-20 05:21:50.740072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.341 [2024-11-20 05:21:50.740081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.341 [2024-11-20 05:21:50.740932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.341 [2024-11-20 05:21:50.741062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.341 [2024-11-20 05:21:50.741304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:36.341 [2024-11-20 05:21:50.741312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.341 [2024-11-20 05:21:50.773675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.341 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.341 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:36.341 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.341 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.341 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 [2024-11-20 05:21:50.864574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
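With the network in place, nvmfappstart launches the target inside the namespace; the reactor lines above confirm that the 0x1E core mask maps to cores 1-4, and -e 0xFFFF sets the tracepoint group mask reported in the trace notices. A minimal launch-and-wait sketch follows; the command and flags are copied from the log, while the polling loop is only a simplified stand-in for the harness's waitforlisten helper, not its actual implementation:

    # Start nvmf_tgt in the target namespace and wait for its RPC socket to answer.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # rpc.py defaults to /var/tmp/spdk.sock, the socket the harness waits on.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 2>/dev/null; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) initialized"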
00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 Malloc0 00:11:36.600 [2024-11-20 05:21:50.934765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62714 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62714 /var/tmp/bdevperf.sock 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62714 ']' 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
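The rpc_cmd batch assembled into rpcs.txt is what produces the Malloc0 bdev and the listener notice above: a 64 MiB bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from earlier in the log) exported through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420 and restricted to the host NQN that gets removed later in the run to provoke I/O aborts. The batch itself is never echoed, so the rpc.py sequence below is reconstructed from the surrounding output and is an assumption, not the literal contents of rpcs.txt:

    # Reconstructed provisioning sequence (assumption); the transport was already
    # created just above with: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0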
00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:36.600 { 00:11:36.600 "params": { 00:11:36.600 "name": "Nvme$subsystem", 00:11:36.600 "trtype": "$TEST_TRANSPORT", 00:11:36.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.600 "adrfam": "ipv4", 00:11:36.600 "trsvcid": "$NVMF_PORT", 00:11:36.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.600 "hdgst": ${hdgst:-false}, 00:11:36.600 "ddgst": ${ddgst:-false} 00:11:36.600 }, 00:11:36.600 "method": "bdev_nvme_attach_controller" 00:11:36.600 } 00:11:36.600 EOF 00:11:36.600 )") 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:36.600 05:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:36.600 "params": { 00:11:36.600 "name": "Nvme0", 00:11:36.600 "trtype": "tcp", 00:11:36.600 "traddr": "10.0.0.3", 00:11:36.600 "adrfam": "ipv4", 00:11:36.600 "trsvcid": "4420", 00:11:36.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:36.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:36.600 "hdgst": false, 00:11:36.600 "ddgst": false 00:11:36.600 }, 00:11:36.600 "method": "bdev_nvme_attach_controller" 00:11:36.600 }' 00:11:36.600 [2024-11-20 05:21:51.065551] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:36.600 [2024-11-20 05:21:51.065687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62714 ] 00:11:36.860 [2024-11-20 05:21:51.228843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.860 [2024-11-20 05:21:51.288524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.860 [2024-11-20 05:21:51.330570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.119 Running I/O for 10 seconds... 
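gen_nvmf_target_json expands the parameter template above into the bdev_nvme_attach_controller block that bdevperf reads over a process-substitution FD (--json /dev/fd/63). A self-contained sketch of the same invocation follows; the params block and the bdevperf flags are taken verbatim from the log, while the outer "subsystems"/"bdev" wrapper is assumed, since only the params fragment is echoed:

    # Attach Nvme0 over NVMe/TCP and run the same 10-second verify workload as the harness.
    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false }
    } ] } ] }'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 10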
00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=969 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 969 -ge 100 ']' 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.703 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.703 [2024-11-20 
05:21:52.205503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.205987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.205996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.703 [2024-11-20 05:21:52.206457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.703 [2024-11-20 05:21:52.206469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.704 [2024-11-20 05:21:52.206982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.206997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3da0 is same with the state(6) to be set 00:11:37.704 [2024-11-20 05:21:52.207156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.704 [2024-11-20 05:21:52.207180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.207193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.704 [2024-11-20 05:21:52.207202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.207213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.704 [2024-11-20 05:21:52.207222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.207232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.704 [2024-11-20 05:21:52.207241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.704 [2024-11-20 05:21:52.207251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff4ce0 is same with the state(6) to be set 00:11:37.704 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.704 [2024-11-20 05:21:52.208404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:37.704 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:37.704 
05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.704 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.704 task offset: 8192 on job bdev=Nvme0n1 fails 00:11:37.704 00:11:37.704 Latency(us) 00:11:37.704 [2024-11-20T05:21:52.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.704 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:37.704 Job: Nvme0n1 ended in about 0.77 seconds with error 00:11:37.704 Verification LBA range: start 0x0 length 0x400 00:11:37.704 Nvme0n1 : 0.77 1417.40 88.59 83.38 0.00 41518.29 2234.18 44326.17 00:11:37.704 [2024-11-20T05:21:52.217Z] =================================================================================================================== 00:11:37.704 [2024-11-20T05:21:52.217Z] Total : 1417.40 88.59 83.38 0.00 41518.29 2234.18 44326.17 00:11:37.704 [2024-11-20 05:21:52.210577] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:37.704 [2024-11-20 05:21:52.210615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff4ce0 (9): Bad file descriptor 00:11:38.005 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.005 05:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:38.005 [2024-11-20 05:21:52.217385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62714 00:11:38.982 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62714) - No such process 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:38.982 { 00:11:38.982 "params": { 00:11:38.982 "name": "Nvme$subsystem", 00:11:38.982 "trtype": "$TEST_TRANSPORT", 00:11:38.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.982 "adrfam": "ipv4", 00:11:38.982 "trsvcid": "$NVMF_PORT", 00:11:38.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.982 "hdgst": ${hdgst:-false}, 00:11:38.982 "ddgst": ${ddgst:-false} 00:11:38.982 }, 00:11:38.982 "method": "bdev_nvme_attach_controller" 00:11:38.982 } 00:11:38.982 
EOF 00:11:38.982 )") 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:38.982 05:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:38.982 "params": { 00:11:38.982 "name": "Nvme0", 00:11:38.982 "trtype": "tcp", 00:11:38.982 "traddr": "10.0.0.3", 00:11:38.982 "adrfam": "ipv4", 00:11:38.982 "trsvcid": "4420", 00:11:38.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:38.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:38.982 "hdgst": false, 00:11:38.982 "ddgst": false 00:11:38.982 }, 00:11:38.982 "method": "bdev_nvme_attach_controller" 00:11:38.982 }' 00:11:38.982 [2024-11-20 05:21:53.274958] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:11:38.982 [2024-11-20 05:21:53.275046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62752 ] 00:11:38.982 [2024-11-20 05:21:53.417283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.982 [2024-11-20 05:21:53.465860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.240 [2024-11-20 05:21:53.511414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.240 Running I/O for 1 seconds... 00:11:40.176 1408.00 IOPS, 88.00 MiB/s 00:11:40.176 Latency(us) 00:11:40.176 [2024-11-20T05:21:54.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.176 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:40.176 Verification LBA range: start 0x0 length 0x400 00:11:40.176 Nvme0n1 : 1.01 1461.74 91.36 0.00 0.00 42782.89 4915.20 43372.92 00:11:40.176 [2024-11-20T05:21:54.689Z] =================================================================================================================== 00:11:40.176 [2024-11-20T05:21:54.689Z] Total : 1461.74 91.36 0.00 0.00 42782.89 4915.20 43372.92 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.435 rmmod nvme_tcp 00:11:40.435 rmmod nvme_fabrics 00:11:40.435 rmmod nvme_keyring 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62668 ']' 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62668 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62668 ']' 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62668 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62668 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:40.435 killing process with pid 62668 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62668' 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62668 00:11:40.435 05:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62668 00:11:40.694 [2024-11-20 05:21:55.083593] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:40.694 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:40.954 00:11:40.954 real 0m5.653s 00:11:40.954 user 0m20.153s 00:11:40.954 sys 0m1.530s 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.954 ************************************ 00:11:40.954 END TEST nvmf_host_management 00:11:40.954 ************************************ 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.954 ************************************ 00:11:40.954 START TEST nvmf_lvol 00:11:40.954 ************************************ 00:11:40.954 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:41.214 * Looking for test storage... 
00:11:41.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.214 --rc genhtml_branch_coverage=1 00:11:41.214 --rc genhtml_function_coverage=1 00:11:41.214 --rc genhtml_legend=1 00:11:41.214 --rc geninfo_all_blocks=1 00:11:41.214 --rc geninfo_unexecuted_blocks=1 00:11:41.214 00:11:41.214 ' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.214 --rc genhtml_branch_coverage=1 00:11:41.214 --rc genhtml_function_coverage=1 00:11:41.214 --rc genhtml_legend=1 00:11:41.214 --rc geninfo_all_blocks=1 00:11:41.214 --rc geninfo_unexecuted_blocks=1 00:11:41.214 00:11:41.214 ' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.214 --rc genhtml_branch_coverage=1 00:11:41.214 --rc genhtml_function_coverage=1 00:11:41.214 --rc genhtml_legend=1 00:11:41.214 --rc geninfo_all_blocks=1 00:11:41.214 --rc geninfo_unexecuted_blocks=1 00:11:41.214 00:11:41.214 ' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.214 --rc genhtml_branch_coverage=1 00:11:41.214 --rc genhtml_function_coverage=1 00:11:41.214 --rc genhtml_legend=1 00:11:41.214 --rc geninfo_all_blocks=1 00:11:41.214 --rc geninfo_unexecuted_blocks=1 00:11:41.214 00:11:41.214 ' 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:41.214 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.215 05:21:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.215 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:41.215 
05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:41.215 Cannot find device "nvmf_init_br" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:41.215 Cannot find device "nvmf_init_br2" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:41.215 Cannot find device "nvmf_tgt_br" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:41.215 Cannot find device "nvmf_tgt_br2" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:41.215 Cannot find device "nvmf_init_br" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:41.215 Cannot find device "nvmf_init_br2" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:41.215 Cannot find device "nvmf_tgt_br" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:41.215 Cannot find device "nvmf_tgt_br2" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:41.215 Cannot find device "nvmf_br" 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:11:41.215 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:41.474 Cannot find device "nvmf_init_if" 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:41.474 Cannot find device "nvmf_init_if2" 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:41.474 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:41.475 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:41.734 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:41.734 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:41.734 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:41.734 05:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:41.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:41.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:11:41.734 00:11:41.734 --- 10.0.0.3 ping statistics --- 00:11:41.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.734 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:41.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:41.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:41.734 00:11:41.734 --- 10.0.0.4 ping statistics --- 00:11:41.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.734 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:41.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:41.734 00:11:41.734 --- 10.0.0.1 ping statistics --- 00:11:41.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.734 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:41.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:11:41.734 00:11:41.734 --- 10.0.0.2 ping statistics --- 00:11:41.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.734 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63022 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63022 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 63022 ']' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.734 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.734 [2024-11-20 05:21:56.111063] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:41.734 [2024-11-20 05:21:56.111572] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.993 [2024-11-20 05:21:56.258172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.993 [2024-11-20 05:21:56.292066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.993 [2024-11-20 05:21:56.292121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.993 [2024-11-20 05:21:56.292133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.993 [2024-11-20 05:21:56.292141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.993 [2024-11-20 05:21:56.292148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.993 [2024-11-20 05:21:56.292949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.993 [2024-11-20 05:21:56.293050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.993 [2024-11-20 05:21:56.293056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.993 [2024-11-20 05:21:56.323553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.993 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:42.251 [2024-11-20 05:21:56.708802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.251 05:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.507 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:42.806 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.117 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:43.117 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:43.376 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:43.635 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c090d676-cb53-44c5-a0d9-62b4760dba92 00:11:43.635 05:21:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c090d676-cb53-44c5-a0d9-62b4760dba92 lvol 20 00:11:43.894 05:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=360752c0-f1a8-44ca-ac67-eeb06b848b32 00:11:43.894 05:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.153 05:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 360752c0-f1a8-44ca-ac67-eeb06b848b32 00:11:44.411 05:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:44.670 [2024-11-20 05:21:59.126831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:44.670 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:44.927 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63090 00:11:44.927 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:44.927 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:46.301 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 360752c0-f1a8-44ca-ac67-eeb06b848b32 MY_SNAPSHOT 00:11:46.301 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a13dd451-6009-4a74-9c9a-1d7da21549f6 00:11:46.301 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 360752c0-f1a8-44ca-ac67-eeb06b848b32 30 00:11:46.866 05:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a13dd451-6009-4a74-9c9a-1d7da21549f6 MY_CLONE 00:11:47.124 05:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=55a69ceb-ce2d-41c4-8aa7-2ed7e47b6d39 00:11:47.124 05:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 55a69ceb-ce2d-41c4-8aa7-2ed7e47b6d39 00:11:47.748 05:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63090 00:11:55.860 Initializing NVMe Controllers 00:11:55.860 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:55.860 Controller IO queue size 128, less than required. 00:11:55.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:55.860 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:55.860 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:55.860 Initialization complete. Launching workers. 
00:11:55.860 ======================================================== 00:11:55.860 Latency(us) 00:11:55.860 Device Information : IOPS MiB/s Average min max 00:11:55.860 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10005.00 39.08 12797.69 1706.81 74694.23 00:11:55.860 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9817.60 38.35 13040.05 2634.77 75605.97 00:11:55.860 ======================================================== 00:11:55.860 Total : 19822.60 77.43 12917.72 1706.81 75605.97 00:11:55.860 00:11:55.860 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:55.860 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 360752c0-f1a8-44ca-ac67-eeb06b848b32 00:11:55.860 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c090d676-cb53-44c5-a0d9-62b4760dba92 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.427 rmmod nvme_tcp 00:11:56.427 rmmod nvme_fabrics 00:11:56.427 rmmod nvme_keyring 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63022 ']' 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63022 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 63022 ']' 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 63022 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63022 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.427 killing process with pid 63022 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 63022' 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 63022 00:11:56.427 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 63022 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:56.685 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.685 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.943 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:56.943 00:11:56.943 real 0m15.799s 00:11:56.943 user 1m5.190s 00:11:56.943 sys 0m4.250s 00:11:56.943 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:56.943 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.943 ************************************ 00:11:56.943 END TEST nvmf_lvol 00:11:56.943 ************************************ 00:11:56.943 05:22:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:56.944 ************************************ 00:11:56.944 START TEST nvmf_lvs_grow 00:11:56.944 ************************************ 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:56.944 * Looking for test storage... 00:11:56.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.944 --rc genhtml_branch_coverage=1 00:11:56.944 --rc genhtml_function_coverage=1 00:11:56.944 --rc genhtml_legend=1 00:11:56.944 --rc geninfo_all_blocks=1 00:11:56.944 --rc geninfo_unexecuted_blocks=1 00:11:56.944 00:11:56.944 ' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.944 --rc genhtml_branch_coverage=1 00:11:56.944 --rc genhtml_function_coverage=1 00:11:56.944 --rc genhtml_legend=1 00:11:56.944 --rc geninfo_all_blocks=1 00:11:56.944 --rc geninfo_unexecuted_blocks=1 00:11:56.944 00:11:56.944 ' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.944 --rc genhtml_branch_coverage=1 00:11:56.944 --rc genhtml_function_coverage=1 00:11:56.944 --rc genhtml_legend=1 00:11:56.944 --rc geninfo_all_blocks=1 00:11:56.944 --rc geninfo_unexecuted_blocks=1 00:11:56.944 00:11:56.944 ' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.944 --rc genhtml_branch_coverage=1 00:11:56.944 --rc genhtml_function_coverage=1 00:11:56.944 --rc genhtml_legend=1 00:11:56.944 --rc geninfo_all_blocks=1 00:11:56.944 --rc geninfo_unexecuted_blocks=1 00:11:56.944 00:11:56.944 ' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:56.944 05:22:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.944 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
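The bdevperf_rpc_sock set here is used later in this run: the lvs_grow cases start a separate bdevperf process and drive it over its own RPC socket, attaching the in-namespace target's listener as bdev Nvme0. A sketch of that attach call, matching the one captured further down in this log:

    # Attach the NVMe-oF TCP target (10.0.0.3:4420, cnode0) to bdevperf as Nvme0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0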
00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
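As in the nvmf_lvol run above, the firewall rules nvmftestinit installs next are tagged so that teardown removes only SPDK's own entries: the ipts helper appends an iptables comment prefixed with SPDK_NVMF, and iptr later filters the saved ruleset on that tag. The pattern, exactly as it appears in this log:

    # insert a rule carrying an SPDK_NVMF comment
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop only the tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore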
00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.945 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:57.204 Cannot find device "nvmf_init_br" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:57.204 Cannot find device "nvmf_init_br2" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:57.204 Cannot find device "nvmf_tgt_br" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:57.204 Cannot find device "nvmf_tgt_br2" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:57.204 Cannot find device "nvmf_init_br" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:57.204 Cannot find device "nvmf_init_br2" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:57.204 Cannot find device "nvmf_tgt_br" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:57.204 Cannot find device "nvmf_tgt_br2" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:57.204 Cannot find device "nvmf_br" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:57.204 Cannot find device "nvmf_init_if" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:57.204 Cannot find device "nvmf_init_if2" 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:57.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:57.204 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
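Once this second network setup completes, the lvs_grow_clean case exercises growing a logical volume store after its backing file is enlarged. A condensed sketch of the RPC sequence captured later in this log; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the lvstore UUID is elided, and the test continues past the portion captured here.

    truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_get_lvstores -u <lvs uuid> | jq -r '.[0].total_data_clusters'   # 49 clusters
    rpc.py bdev_lvol_create -u <lvs uuid> lvol 150     # 150 MiB lvol on the 200 MiB store
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev                    # AIO bdev picks up the new size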
00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:57.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:57.463 00:11:57.463 --- 10.0.0.3 ping statistics --- 00:11:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.463 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:57.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:57.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:11:57.463 00:11:57.463 --- 10.0.0.4 ping statistics --- 00:11:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.463 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:57.463 00:11:57.463 --- 10.0.0.1 ping statistics --- 00:11:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.463 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:57.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:57.463 00:11:57.463 --- 10.0.0.2 ping statistics --- 00:11:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.463 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63469 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63469 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63469 ']' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:57.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:57.463 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:57.463 [2024-11-20 05:22:11.901142] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:11:57.463 [2024-11-20 05:22:11.901287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.721 [2024-11-20 05:22:12.060292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.721 [2024-11-20 05:22:12.092253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.721 [2024-11-20 05:22:12.092306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.721 [2024-11-20 05:22:12.092317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.721 [2024-11-20 05:22:12.092325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.722 [2024-11-20 05:22:12.092333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.722 [2024-11-20 05:22:12.092627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.722 [2024-11-20 05:22:12.121526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.722 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:57.980 [2024-11-20 05:22:12.453885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:57.980 ************************************ 00:11:57.980 START TEST lvs_grow_clean 00:11:57.980 ************************************ 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:57.980 05:22:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:57.980 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:58.238 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.496 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:58.496 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:58.754 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c81d96fa-60b6-4819-957b-409289409ef2 00:11:58.754 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:11:58.754 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:59.011 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:59.011 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:59.011 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c81d96fa-60b6-4819-957b-409289409ef2 lvol 150 00:11:59.280 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bb5e995f-1a8b-4212-ba63-b1f7a22577aa 00:11:59.280 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:59.280 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:59.539 [2024-11-20 05:22:13.869803] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:59.539 [2024-11-20 05:22:13.869890] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:59.539 true 00:11:59.539 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:11:59.539 05:22:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:59.798 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:59.798 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:00.057 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb5e995f-1a8b-4212-ba63-b1f7a22577aa 00:12:00.317 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:00.576 [2024-11-20 05:22:14.994462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:00.576 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63549 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63549 /var/tmp/bdevperf.sock 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63549 ']' 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:00.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:00.834 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:00.834 [2024-11-20 05:22:15.334108] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
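Condensed from the rpc.py calls traced above (a sketch reassembled from the logged commands, not a verbatim excerpt), the setup lvs_grow_clean performs before launching bdevperf is roughly the following; it assumes the target is already running with the TCP transport created earlier (nvmf_create_transport -t tcp -o -u 8192), and uses the paths, sizes and names exactly as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB lvol; the fresh store reports 49 data clusters
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

bdevperf then connects to that subsystem (bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0, as traced below) and drives the 10-second, queue-depth-128, 4 KiB random-write run whose per-second results follow.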
00:12:00.834 [2024-11-20 05:22:15.334226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63549 ] 00:12:01.092 [2024-11-20 05:22:15.480997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.092 [2024-11-20 05:22:15.513943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.092 [2024-11-20 05:22:15.542934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.092 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:01.092 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:12:01.092 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:01.659 Nvme0n1 00:12:01.659 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:01.955 [ 00:12:01.955 { 00:12:01.955 "name": "Nvme0n1", 00:12:01.955 "aliases": [ 00:12:01.955 "bb5e995f-1a8b-4212-ba63-b1f7a22577aa" 00:12:01.955 ], 00:12:01.955 "product_name": "NVMe disk", 00:12:01.955 "block_size": 4096, 00:12:01.955 "num_blocks": 38912, 00:12:01.955 "uuid": "bb5e995f-1a8b-4212-ba63-b1f7a22577aa", 00:12:01.955 "numa_id": -1, 00:12:01.955 "assigned_rate_limits": { 00:12:01.955 "rw_ios_per_sec": 0, 00:12:01.955 "rw_mbytes_per_sec": 0, 00:12:01.955 "r_mbytes_per_sec": 0, 00:12:01.955 "w_mbytes_per_sec": 0 00:12:01.955 }, 00:12:01.955 "claimed": false, 00:12:01.955 "zoned": false, 00:12:01.955 "supported_io_types": { 00:12:01.955 "read": true, 00:12:01.955 "write": true, 00:12:01.955 "unmap": true, 00:12:01.955 "flush": true, 00:12:01.955 "reset": true, 00:12:01.955 "nvme_admin": true, 00:12:01.955 "nvme_io": true, 00:12:01.955 "nvme_io_md": false, 00:12:01.955 "write_zeroes": true, 00:12:01.955 "zcopy": false, 00:12:01.955 "get_zone_info": false, 00:12:01.955 "zone_management": false, 00:12:01.955 "zone_append": false, 00:12:01.955 "compare": true, 00:12:01.955 "compare_and_write": true, 00:12:01.955 "abort": true, 00:12:01.955 "seek_hole": false, 00:12:01.955 "seek_data": false, 00:12:01.955 "copy": true, 00:12:01.955 "nvme_iov_md": false 00:12:01.955 }, 00:12:01.955 "memory_domains": [ 00:12:01.955 { 00:12:01.955 "dma_device_id": "system", 00:12:01.955 "dma_device_type": 1 00:12:01.955 } 00:12:01.955 ], 00:12:01.955 "driver_specific": { 00:12:01.955 "nvme": [ 00:12:01.955 { 00:12:01.955 "trid": { 00:12:01.955 "trtype": "TCP", 00:12:01.955 "adrfam": "IPv4", 00:12:01.955 "traddr": "10.0.0.3", 00:12:01.955 "trsvcid": "4420", 00:12:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:01.955 }, 00:12:01.955 "ctrlr_data": { 00:12:01.955 "cntlid": 1, 00:12:01.955 "vendor_id": "0x8086", 00:12:01.955 "model_number": "SPDK bdev Controller", 00:12:01.955 "serial_number": "SPDK0", 00:12:01.955 "firmware_revision": "25.01", 00:12:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:01.956 "oacs": { 00:12:01.956 "security": 0, 00:12:01.956 "format": 0, 00:12:01.956 "firmware": 0, 
00:12:01.956 "ns_manage": 0 00:12:01.956 }, 00:12:01.956 "multi_ctrlr": true, 00:12:01.956 "ana_reporting": false 00:12:01.956 }, 00:12:01.956 "vs": { 00:12:01.956 "nvme_version": "1.3" 00:12:01.956 }, 00:12:01.956 "ns_data": { 00:12:01.956 "id": 1, 00:12:01.956 "can_share": true 00:12:01.956 } 00:12:01.956 } 00:12:01.956 ], 00:12:01.956 "mp_policy": "active_passive" 00:12:01.956 } 00:12:01.956 } 00:12:01.956 ] 00:12:01.956 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63561 00:12:01.956 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:01.956 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:01.956 Running I/O for 10 seconds... 00:12:02.918 Latency(us) 00:12:02.918 [2024-11-20T05:22:17.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.918 Nvme0n1 : 1.00 6698.00 26.16 0.00 0.00 0.00 0.00 0.00 00:12:02.918 [2024-11-20T05:22:17.431Z] =================================================================================================================== 00:12:02.918 [2024-11-20T05:22:17.431Z] Total : 6698.00 26.16 0.00 0.00 0.00 0.00 0.00 00:12:02.918 00:12:03.852 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:03.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.852 Nvme0n1 : 2.00 6841.50 26.72 0.00 0.00 0.00 0.00 0.00 00:12:03.852 [2024-11-20T05:22:18.365Z] =================================================================================================================== 00:12:03.852 [2024-11-20T05:22:18.365Z] Total : 6841.50 26.72 0.00 0.00 0.00 0.00 0.00 00:12:03.852 00:12:04.111 true 00:12:04.111 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:04.111 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:04.369 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:04.369 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:04.369 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63561 00:12:04.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.933 Nvme0n1 : 3.00 6889.33 26.91 0.00 0.00 0.00 0.00 0.00 00:12:04.933 [2024-11-20T05:22:19.446Z] =================================================================================================================== 00:12:04.933 [2024-11-20T05:22:19.446Z] Total : 6889.33 26.91 0.00 0.00 0.00 0.00 0.00 00:12:04.933 00:12:05.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.868 Nvme0n1 : 4.00 6881.50 26.88 0.00 0.00 0.00 0.00 0.00 00:12:05.868 [2024-11-20T05:22:20.381Z] 
=================================================================================================================== 00:12:05.868 [2024-11-20T05:22:20.381Z] Total : 6881.50 26.88 0.00 0.00 0.00 0.00 0.00 00:12:05.868 00:12:07.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.243 Nvme0n1 : 5.00 6851.40 26.76 0.00 0.00 0.00 0.00 0.00 00:12:07.243 [2024-11-20T05:22:21.756Z] =================================================================================================================== 00:12:07.243 [2024-11-20T05:22:21.756Z] Total : 6851.40 26.76 0.00 0.00 0.00 0.00 0.00 00:12:07.243 00:12:07.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.808 Nvme0n1 : 6.00 6747.83 26.36 0.00 0.00 0.00 0.00 0.00 00:12:07.808 [2024-11-20T05:22:22.321Z] =================================================================================================================== 00:12:07.808 [2024-11-20T05:22:22.322Z] Total : 6747.83 26.36 0.00 0.00 0.00 0.00 0.00 00:12:07.809 00:12:09.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.183 Nvme0n1 : 7.00 6727.29 26.28 0.00 0.00 0.00 0.00 0.00 00:12:09.183 [2024-11-20T05:22:23.696Z] =================================================================================================================== 00:12:09.183 [2024-11-20T05:22:23.696Z] Total : 6727.29 26.28 0.00 0.00 0.00 0.00 0.00 00:12:09.183 00:12:10.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.118 Nvme0n1 : 8.00 6711.88 26.22 0.00 0.00 0.00 0.00 0.00 00:12:10.118 [2024-11-20T05:22:24.631Z] =================================================================================================================== 00:12:10.118 [2024-11-20T05:22:24.631Z] Total : 6711.88 26.22 0.00 0.00 0.00 0.00 0.00 00:12:10.118 00:12:11.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.052 Nvme0n1 : 9.00 6657.56 26.01 0.00 0.00 0.00 0.00 0.00 00:12:11.052 [2024-11-20T05:22:25.565Z] =================================================================================================================== 00:12:11.052 [2024-11-20T05:22:25.565Z] Total : 6657.56 26.01 0.00 0.00 0.00 0.00 0.00 00:12:11.052 00:12:12.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.018 Nvme0n1 : 10.00 6652.20 25.99 0.00 0.00 0.00 0.00 0.00 00:12:12.018 [2024-11-20T05:22:26.531Z] =================================================================================================================== 00:12:12.018 [2024-11-20T05:22:26.531Z] Total : 6652.20 25.99 0.00 0.00 0.00 0.00 0.00 00:12:12.018 00:12:12.018 00:12:12.018 Latency(us) 00:12:12.018 [2024-11-20T05:22:26.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.018 Nvme0n1 : 10.02 6654.04 25.99 0.00 0.00 19230.61 5332.25 110577.11 00:12:12.018 [2024-11-20T05:22:26.531Z] =================================================================================================================== 00:12:12.018 [2024-11-20T05:22:26.531Z] Total : 6654.04 25.99 0.00 0.00 19230.61 5332.25 110577.11 00:12:12.018 { 00:12:12.018 "results": [ 00:12:12.018 { 00:12:12.018 "job": "Nvme0n1", 00:12:12.018 "core_mask": "0x2", 00:12:12.018 "workload": "randwrite", 00:12:12.018 "status": "finished", 00:12:12.018 "queue_depth": 128, 00:12:12.018 "io_size": 4096, 00:12:12.018 "runtime": 
10.016465, 00:12:12.018 "iops": 6654.044116362409, 00:12:12.018 "mibps": 25.99235982954066, 00:12:12.018 "io_failed": 0, 00:12:12.018 "io_timeout": 0, 00:12:12.018 "avg_latency_us": 19230.61468785378, 00:12:12.018 "min_latency_us": 5332.2472727272725, 00:12:12.018 "max_latency_us": 110577.10545454545 00:12:12.018 } 00:12:12.018 ], 00:12:12.018 "core_count": 1 00:12:12.018 } 00:12:12.018 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63549 00:12:12.018 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63549 ']' 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63549 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63549 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:12.019 killing process with pid 63549 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63549' 00:12:12.019 Received shutdown signal, test time was about 10.000000 seconds 00:12:12.019 00:12:12.019 Latency(us) 00:12:12.019 [2024-11-20T05:22:26.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.019 [2024-11-20T05:22:26.532Z] =================================================================================================================== 00:12:12.019 [2024-11-20T05:22:26.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63549 00:12:12.019 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63549 00:12:12.277 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:12.534 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:12.790 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:12.790 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:13.107 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:13.107 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:13.107 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:13.673 [2024-11-20 05:22:27.911559] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:13.673 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:13.930 request: 00:12:13.930 { 00:12:13.930 "uuid": "c81d96fa-60b6-4819-957b-409289409ef2", 00:12:13.930 "method": "bdev_lvol_get_lvstores", 00:12:13.930 "req_id": 1 00:12:13.930 } 00:12:13.930 Got JSON-RPC error response 00:12:13.930 response: 00:12:13.930 { 00:12:13.930 "code": -19, 00:12:13.930 "message": "No such device" 00:12:13.930 } 00:12:13.930 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:13.930 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:13.930 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:13.930 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:13.930 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.187 aio_bdev 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
bb5e995f-1a8b-4212-ba63-b1f7a22577aa 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=bb5e995f-1a8b-4212-ba63-b1f7a22577aa 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.187 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:14.445 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb5e995f-1a8b-4212-ba63-b1f7a22577aa -t 2000 00:12:14.705 [ 00:12:14.705 { 00:12:14.705 "name": "bb5e995f-1a8b-4212-ba63-b1f7a22577aa", 00:12:14.705 "aliases": [ 00:12:14.705 "lvs/lvol" 00:12:14.705 ], 00:12:14.705 "product_name": "Logical Volume", 00:12:14.705 "block_size": 4096, 00:12:14.705 "num_blocks": 38912, 00:12:14.705 "uuid": "bb5e995f-1a8b-4212-ba63-b1f7a22577aa", 00:12:14.705 "assigned_rate_limits": { 00:12:14.705 "rw_ios_per_sec": 0, 00:12:14.705 "rw_mbytes_per_sec": 0, 00:12:14.705 "r_mbytes_per_sec": 0, 00:12:14.705 "w_mbytes_per_sec": 0 00:12:14.705 }, 00:12:14.705 "claimed": false, 00:12:14.705 "zoned": false, 00:12:14.705 "supported_io_types": { 00:12:14.705 "read": true, 00:12:14.705 "write": true, 00:12:14.705 "unmap": true, 00:12:14.705 "flush": false, 00:12:14.705 "reset": true, 00:12:14.705 "nvme_admin": false, 00:12:14.705 "nvme_io": false, 00:12:14.705 "nvme_io_md": false, 00:12:14.705 "write_zeroes": true, 00:12:14.705 "zcopy": false, 00:12:14.705 "get_zone_info": false, 00:12:14.705 "zone_management": false, 00:12:14.705 "zone_append": false, 00:12:14.705 "compare": false, 00:12:14.705 "compare_and_write": false, 00:12:14.705 "abort": false, 00:12:14.705 "seek_hole": true, 00:12:14.705 "seek_data": true, 00:12:14.705 "copy": false, 00:12:14.705 "nvme_iov_md": false 00:12:14.705 }, 00:12:14.705 "driver_specific": { 00:12:14.705 "lvol": { 00:12:14.705 "lvol_store_uuid": "c81d96fa-60b6-4819-957b-409289409ef2", 00:12:14.705 "base_bdev": "aio_bdev", 00:12:14.705 "thin_provision": false, 00:12:14.705 "num_allocated_clusters": 38, 00:12:14.705 "snapshot": false, 00:12:14.705 "clone": false, 00:12:14.705 "esnap_clone": false 00:12:14.705 } 00:12:14.705 } 00:12:14.705 } 00:12:14.705 ] 00:12:14.705 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:12:14.705 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:14.705 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:15.271 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:15.271 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:15.271 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:15.529 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:15.529 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bb5e995f-1a8b-4212-ba63-b1f7a22577aa 00:12:15.787 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c81d96fa-60b6-4819-957b-409289409ef2 00:12:16.090 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.656 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:16.913 ************************************ 00:12:16.913 END TEST lvs_grow_clean 00:12:16.913 ************************************ 00:12:16.913 00:12:16.913 real 0m18.807s 00:12:16.913 user 0m17.524s 00:12:16.913 sys 0m2.551s 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 ************************************ 00:12:16.913 START TEST lvs_grow_dirty 00:12:16.913 ************************************ 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:16.913 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:17.479 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:17.479 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:17.738 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:17.738 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:17.738 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:17.995 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:17.995 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:17.995 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b lvol 150 00:12:18.253 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4201e144-252c-401d-8964-0ec8e517ce43 00:12:18.253 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:18.253 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:18.820 [2024-11-20 05:22:33.032159] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:18.820 [2024-11-20 05:22:33.032250] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:18.820 true 00:12:18.820 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:18.820 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:19.077 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:19.077 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:19.335 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4201e144-252c-401d-8964-0ec8e517ce43 00:12:19.594 05:22:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:20.160 [2024-11-20 05:22:34.380793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:20.160 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63830 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63830 /var/tmp/bdevperf.sock 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63830 ']' 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.419 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:20.419 [2024-11-20 05:22:34.851479] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
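The backing AIO file was already grown to 400M and rescanned above; what remains, once bdevperf is running, is to grow the lvstore itself under I/O and re-check its cluster count. Condensed from the commands logged below (lvstore UUID and jq filter taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_grow_lvstore -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b
    clusters=$($rpc bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))    # 49 data clusters at creation, 99 after the grow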
00:12:20.419 [2024-11-20 05:22:34.851579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63830 ] 00:12:20.677 [2024-11-20 05:22:35.003065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.677 [2024-11-20 05:22:35.035776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.677 [2024-11-20 05:22:35.065328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:20.940 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.940 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:12:20.940 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:21.198 Nvme0n1 00:12:21.198 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:21.457 [ 00:12:21.457 { 00:12:21.457 "name": "Nvme0n1", 00:12:21.457 "aliases": [ 00:12:21.457 "4201e144-252c-401d-8964-0ec8e517ce43" 00:12:21.457 ], 00:12:21.457 "product_name": "NVMe disk", 00:12:21.457 "block_size": 4096, 00:12:21.457 "num_blocks": 38912, 00:12:21.457 "uuid": "4201e144-252c-401d-8964-0ec8e517ce43", 00:12:21.457 "numa_id": -1, 00:12:21.457 "assigned_rate_limits": { 00:12:21.457 "rw_ios_per_sec": 0, 00:12:21.457 "rw_mbytes_per_sec": 0, 00:12:21.457 "r_mbytes_per_sec": 0, 00:12:21.457 "w_mbytes_per_sec": 0 00:12:21.457 }, 00:12:21.457 "claimed": false, 00:12:21.457 "zoned": false, 00:12:21.457 "supported_io_types": { 00:12:21.457 "read": true, 00:12:21.457 "write": true, 00:12:21.457 "unmap": true, 00:12:21.457 "flush": true, 00:12:21.457 "reset": true, 00:12:21.457 "nvme_admin": true, 00:12:21.457 "nvme_io": true, 00:12:21.457 "nvme_io_md": false, 00:12:21.457 "write_zeroes": true, 00:12:21.457 "zcopy": false, 00:12:21.457 "get_zone_info": false, 00:12:21.457 "zone_management": false, 00:12:21.457 "zone_append": false, 00:12:21.457 "compare": true, 00:12:21.457 "compare_and_write": true, 00:12:21.457 "abort": true, 00:12:21.457 "seek_hole": false, 00:12:21.457 "seek_data": false, 00:12:21.457 "copy": true, 00:12:21.457 "nvme_iov_md": false 00:12:21.457 }, 00:12:21.457 "memory_domains": [ 00:12:21.457 { 00:12:21.457 "dma_device_id": "system", 00:12:21.457 "dma_device_type": 1 00:12:21.457 } 00:12:21.457 ], 00:12:21.457 "driver_specific": { 00:12:21.457 "nvme": [ 00:12:21.457 { 00:12:21.457 "trid": { 00:12:21.457 "trtype": "TCP", 00:12:21.457 "adrfam": "IPv4", 00:12:21.457 "traddr": "10.0.0.3", 00:12:21.457 "trsvcid": "4420", 00:12:21.457 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:21.457 }, 00:12:21.457 "ctrlr_data": { 00:12:21.457 "cntlid": 1, 00:12:21.457 "vendor_id": "0x8086", 00:12:21.457 "model_number": "SPDK bdev Controller", 00:12:21.457 "serial_number": "SPDK0", 00:12:21.457 "firmware_revision": "25.01", 00:12:21.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:21.457 "oacs": { 00:12:21.457 "security": 0, 00:12:21.457 "format": 0, 00:12:21.457 "firmware": 0, 
00:12:21.457 "ns_manage": 0 00:12:21.457 }, 00:12:21.457 "multi_ctrlr": true, 00:12:21.457 "ana_reporting": false 00:12:21.457 }, 00:12:21.457 "vs": { 00:12:21.457 "nvme_version": "1.3" 00:12:21.457 }, 00:12:21.457 "ns_data": { 00:12:21.457 "id": 1, 00:12:21.457 "can_share": true 00:12:21.457 } 00:12:21.457 } 00:12:21.457 ], 00:12:21.457 "mp_policy": "active_passive" 00:12:21.457 } 00:12:21.457 } 00:12:21.457 ] 00:12:21.457 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63846 00:12:21.457 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:21.457 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:21.715 Running I/O for 10 seconds... 00:12:22.649 Latency(us) 00:12:22.649 [2024-11-20T05:22:37.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.649 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:12:22.649 [2024-11-20T05:22:37.162Z] =================================================================================================================== 00:12:22.649 [2024-11-20T05:22:37.162Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:12:22.649 00:12:23.584 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:23.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.584 Nvme0n1 : 2.00 6747.00 26.36 0.00 0.00 0.00 0.00 0.00 00:12:23.584 [2024-11-20T05:22:38.097Z] =================================================================================================================== 00:12:23.584 [2024-11-20T05:22:38.097Z] Total : 6747.00 26.36 0.00 0.00 0.00 0.00 0.00 00:12:23.584 00:12:23.842 true 00:12:23.842 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:23.842 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:24.100 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:24.100 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:24.100 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63846 00:12:24.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.667 Nvme0n1 : 3.00 6784.00 26.50 0.00 0.00 0.00 0.00 0.00 00:12:24.667 [2024-11-20T05:22:39.180Z] =================================================================================================================== 00:12:24.667 [2024-11-20T05:22:39.180Z] Total : 6784.00 26.50 0.00 0.00 0.00 0.00 0.00 00:12:24.667 00:12:25.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.618 Nvme0n1 : 4.00 6739.00 26.32 0.00 0.00 0.00 0.00 0.00 00:12:25.618 [2024-11-20T05:22:40.131Z] 
=================================================================================================================== 00:12:25.618 [2024-11-20T05:22:40.131Z] Total : 6739.00 26.32 0.00 0.00 0.00 0.00 0.00 00:12:25.618 00:12:26.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.993 Nvme0n1 : 5.00 6737.40 26.32 0.00 0.00 0.00 0.00 0.00 00:12:26.993 [2024-11-20T05:22:41.506Z] =================================================================================================================== 00:12:26.993 [2024-11-20T05:22:41.506Z] Total : 6737.40 26.32 0.00 0.00 0.00 0.00 0.00 00:12:26.993 00:12:27.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.559 Nvme0n1 : 6.00 6700.00 26.17 0.00 0.00 0.00 0.00 0.00 00:12:27.559 [2024-11-20T05:22:42.072Z] =================================================================================================================== 00:12:27.559 [2024-11-20T05:22:42.072Z] Total : 6700.00 26.17 0.00 0.00 0.00 0.00 0.00 00:12:27.559 00:12:28.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.936 Nvme0n1 : 7.00 6541.14 25.55 0.00 0.00 0.00 0.00 0.00 00:12:28.936 [2024-11-20T05:22:43.449Z] =================================================================================================================== 00:12:28.936 [2024-11-20T05:22:43.449Z] Total : 6541.14 25.55 0.00 0.00 0.00 0.00 0.00 00:12:28.936 00:12:29.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.900 Nvme0n1 : 8.00 6533.12 25.52 0.00 0.00 0.00 0.00 0.00 00:12:29.900 [2024-11-20T05:22:44.413Z] =================================================================================================================== 00:12:29.900 [2024-11-20T05:22:44.413Z] Total : 6533.12 25.52 0.00 0.00 0.00 0.00 0.00 00:12:29.900 00:12:30.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.835 Nvme0n1 : 9.00 6470.44 25.28 0.00 0.00 0.00 0.00 0.00 00:12:30.835 [2024-11-20T05:22:45.348Z] =================================================================================================================== 00:12:30.835 [2024-11-20T05:22:45.348Z] Total : 6470.44 25.28 0.00 0.00 0.00 0.00 0.00 00:12:30.835 00:12:31.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.769 Nvme0n1 : 10.00 6483.80 25.33 0.00 0.00 0.00 0.00 0.00 00:12:31.769 [2024-11-20T05:22:46.282Z] =================================================================================================================== 00:12:31.769 [2024-11-20T05:22:46.282Z] Total : 6483.80 25.33 0.00 0.00 0.00 0.00 0.00 00:12:31.769 00:12:31.769 00:12:31.769 Latency(us) 00:12:31.769 [2024-11-20T05:22:46.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.769 Nvme0n1 : 10.00 6481.07 25.32 0.00 0.00 19739.54 8936.73 201135.94 00:12:31.769 [2024-11-20T05:22:46.282Z] =================================================================================================================== 00:12:31.769 [2024-11-20T05:22:46.282Z] Total : 6481.07 25.32 0.00 0.00 19739.54 8936.73 201135.94 00:12:31.769 { 00:12:31.770 "results": [ 00:12:31.770 { 00:12:31.770 "job": "Nvme0n1", 00:12:31.770 "core_mask": "0x2", 00:12:31.770 "workload": "randwrite", 00:12:31.770 "status": "finished", 00:12:31.770 "queue_depth": 128, 00:12:31.770 "io_size": 4096, 00:12:31.770 "runtime": 
10.004372, 00:12:31.770 "iops": 6481.066477735933, 00:12:31.770 "mibps": 25.31666592865599, 00:12:31.770 "io_failed": 0, 00:12:31.770 "io_timeout": 0, 00:12:31.770 "avg_latency_us": 19739.54199618916, 00:12:31.770 "min_latency_us": 8936.727272727272, 00:12:31.770 "max_latency_us": 201135.9418181818 00:12:31.770 } 00:12:31.770 ], 00:12:31.770 "core_count": 1 00:12:31.770 } 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63830 ']' 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:31.770 killing process with pid 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63830' 00:12:31.770 Received shutdown signal, test time was about 10.000000 seconds 00:12:31.770 00:12:31.770 Latency(us) 00:12:31.770 [2024-11-20T05:22:46.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.770 [2024-11-20T05:22:46.283Z] =================================================================================================================== 00:12:31.770 [2024-11-20T05:22:46.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63830 00:12:31.770 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:32.336 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:32.336 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:32.336 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63469 00:12:32.902 
05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63469 00:12:32.902 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63469 Killed "${NVMF_APP[@]}" "$@" 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63979 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63979 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63979 ']' 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.902 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:32.902 [2024-11-20 05:22:47.216048] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:12:32.902 [2024-11-20 05:22:47.216130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.902 [2024-11-20 05:22:47.362543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.902 [2024-11-20 05:22:47.395649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.902 [2024-11-20 05:22:47.395697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.902 [2024-11-20 05:22:47.395708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.902 [2024-11-20 05:22:47.395716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.902 [2024-11-20 05:22:47.395724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
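This is where the dirty variant earns its name: the first target was killed with SIGKILL while the lvstore was still loaded, so the freshly started target (pid 63979) has to recover it. As the trace below shows, re-creating the AIO bdev on the same file triggers blobstore recovery, after which the test asserts that the grown geometry survived the unclean shutdown; a condensed sketch, reusing the names and values from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_create "$aio" aio_bdev 4096     # 'Performing recovery on blobstore' is logged at this point
    free=$($rpc bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b | jq -r '.[0].total_data_clusters')
    (( free == 61 )) && (( total == 99 ))         # the grow to 99 clusters persisted across the kill -9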
00:12:32.902 [2024-11-20 05:22:47.396045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.160 [2024-11-20 05:22:47.425764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.160 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:33.443 [2024-11-20 05:22:47.776918] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:33.443 [2024-11-20 05:22:47.777179] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:33.443 [2024-11-20 05:22:47.777400] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4201e144-252c-401d-8964-0ec8e517ce43 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=4201e144-252c-401d-8964-0ec8e517ce43 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:33.443 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:33.704 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4201e144-252c-401d-8964-0ec8e517ce43 -t 2000 00:12:33.963 [ 00:12:33.963 { 00:12:33.963 "name": "4201e144-252c-401d-8964-0ec8e517ce43", 00:12:33.963 "aliases": [ 00:12:33.963 "lvs/lvol" 00:12:33.963 ], 00:12:33.963 "product_name": "Logical Volume", 00:12:33.963 "block_size": 4096, 00:12:33.963 "num_blocks": 38912, 00:12:33.963 "uuid": "4201e144-252c-401d-8964-0ec8e517ce43", 00:12:33.963 "assigned_rate_limits": { 00:12:33.963 "rw_ios_per_sec": 0, 00:12:33.963 "rw_mbytes_per_sec": 0, 00:12:33.963 "r_mbytes_per_sec": 0, 00:12:33.963 "w_mbytes_per_sec": 0 00:12:33.963 }, 00:12:33.963 
"claimed": false, 00:12:33.963 "zoned": false, 00:12:33.963 "supported_io_types": { 00:12:33.963 "read": true, 00:12:33.963 "write": true, 00:12:33.963 "unmap": true, 00:12:33.963 "flush": false, 00:12:33.963 "reset": true, 00:12:33.963 "nvme_admin": false, 00:12:33.963 "nvme_io": false, 00:12:33.963 "nvme_io_md": false, 00:12:33.963 "write_zeroes": true, 00:12:33.963 "zcopy": false, 00:12:33.963 "get_zone_info": false, 00:12:33.963 "zone_management": false, 00:12:33.963 "zone_append": false, 00:12:33.963 "compare": false, 00:12:33.963 "compare_and_write": false, 00:12:33.963 "abort": false, 00:12:33.963 "seek_hole": true, 00:12:33.963 "seek_data": true, 00:12:33.963 "copy": false, 00:12:33.963 "nvme_iov_md": false 00:12:33.963 }, 00:12:33.963 "driver_specific": { 00:12:33.963 "lvol": { 00:12:33.963 "lvol_store_uuid": "0d36e5d0-7cc6-44a7-8a10-7f013683104b", 00:12:33.963 "base_bdev": "aio_bdev", 00:12:33.963 "thin_provision": false, 00:12:33.963 "num_allocated_clusters": 38, 00:12:33.963 "snapshot": false, 00:12:33.963 "clone": false, 00:12:33.963 "esnap_clone": false 00:12:33.963 } 00:12:33.963 } 00:12:33.963 } 00:12:33.963 ] 00:12:33.963 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:12:33.963 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:33.963 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:34.222 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:34.222 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:34.222 05:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:34.791 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:34.791 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:35.049 [2024-11-20 05:22:49.358796] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.049 05:22:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:35.049 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:35.308 request: 00:12:35.308 { 00:12:35.308 "uuid": "0d36e5d0-7cc6-44a7-8a10-7f013683104b", 00:12:35.308 "method": "bdev_lvol_get_lvstores", 00:12:35.308 "req_id": 1 00:12:35.308 } 00:12:35.308 Got JSON-RPC error response 00:12:35.308 response: 00:12:35.308 { 00:12:35.308 "code": -19, 00:12:35.308 "message": "No such device" 00:12:35.308 } 00:12:35.308 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:35.308 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.308 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.308 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.308 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:35.875 aio_bdev 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4201e144-252c-401d-8964-0ec8e517ce43 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=4201e144-252c-401d-8964-0ec8e517ce43 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:35.875 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:36.133 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4201e144-252c-401d-8964-0ec8e517ce43 -t 2000 00:12:36.392 [ 00:12:36.392 { 
00:12:36.392 "name": "4201e144-252c-401d-8964-0ec8e517ce43", 00:12:36.392 "aliases": [ 00:12:36.392 "lvs/lvol" 00:12:36.392 ], 00:12:36.392 "product_name": "Logical Volume", 00:12:36.392 "block_size": 4096, 00:12:36.392 "num_blocks": 38912, 00:12:36.392 "uuid": "4201e144-252c-401d-8964-0ec8e517ce43", 00:12:36.392 "assigned_rate_limits": { 00:12:36.392 "rw_ios_per_sec": 0, 00:12:36.392 "rw_mbytes_per_sec": 0, 00:12:36.392 "r_mbytes_per_sec": 0, 00:12:36.392 "w_mbytes_per_sec": 0 00:12:36.392 }, 00:12:36.392 "claimed": false, 00:12:36.392 "zoned": false, 00:12:36.392 "supported_io_types": { 00:12:36.392 "read": true, 00:12:36.392 "write": true, 00:12:36.392 "unmap": true, 00:12:36.392 "flush": false, 00:12:36.392 "reset": true, 00:12:36.392 "nvme_admin": false, 00:12:36.392 "nvme_io": false, 00:12:36.392 "nvme_io_md": false, 00:12:36.392 "write_zeroes": true, 00:12:36.392 "zcopy": false, 00:12:36.392 "get_zone_info": false, 00:12:36.392 "zone_management": false, 00:12:36.392 "zone_append": false, 00:12:36.392 "compare": false, 00:12:36.392 "compare_and_write": false, 00:12:36.392 "abort": false, 00:12:36.392 "seek_hole": true, 00:12:36.392 "seek_data": true, 00:12:36.392 "copy": false, 00:12:36.392 "nvme_iov_md": false 00:12:36.392 }, 00:12:36.392 "driver_specific": { 00:12:36.392 "lvol": { 00:12:36.392 "lvol_store_uuid": "0d36e5d0-7cc6-44a7-8a10-7f013683104b", 00:12:36.392 "base_bdev": "aio_bdev", 00:12:36.392 "thin_provision": false, 00:12:36.392 "num_allocated_clusters": 38, 00:12:36.392 "snapshot": false, 00:12:36.392 "clone": false, 00:12:36.392 "esnap_clone": false 00:12:36.392 } 00:12:36.392 } 00:12:36.392 } 00:12:36.392 ] 00:12:36.392 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:12:36.392 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:36.392 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:36.956 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:36.956 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:36.956 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:37.213 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:37.213 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4201e144-252c-401d-8964-0ec8e517ce43 00:12:37.472 05:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d36e5d0-7cc6-44a7-8a10-7f013683104b 00:12:37.731 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:37.988 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:38.554 ************************************ 00:12:38.554 END TEST lvs_grow_dirty 00:12:38.554 ************************************ 00:12:38.554 00:12:38.554 real 0m21.619s 00:12:38.554 user 0m45.167s 00:12:38.554 sys 0m7.809s 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:12:38.554 05:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:38.554 nvmf_trace.0 00:12:38.554 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:12:38.554 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:38.554 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.554 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.121 rmmod nvme_tcp 00:12:39.121 rmmod nvme_fabrics 00:12:39.121 rmmod nvme_keyring 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63979 ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63979 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63979 ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63979 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:12:39.121 05:22:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63979 00:12:39.121 killing process with pid 63979 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63979' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63979 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63979 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:39.121 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:12:39.379 ************************************ 00:12:39.379 END TEST nvmf_lvs_grow 00:12:39.379 ************************************ 00:12:39.379 00:12:39.379 real 0m42.575s 00:12:39.379 user 1m9.631s 00:12:39.379 sys 0m11.250s 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.379 ************************************ 00:12:39.379 START TEST nvmf_bdev_io_wait 00:12:39.379 ************************************ 00:12:39.379 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:39.638 * Looking for test storage... 
00:12:39.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.638 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:39.638 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:12:39.638 05:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.638 --rc genhtml_branch_coverage=1 00:12:39.638 --rc genhtml_function_coverage=1 00:12:39.638 --rc genhtml_legend=1 00:12:39.638 --rc geninfo_all_blocks=1 00:12:39.638 --rc geninfo_unexecuted_blocks=1 00:12:39.638 00:12:39.638 ' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.638 --rc genhtml_branch_coverage=1 00:12:39.638 --rc genhtml_function_coverage=1 00:12:39.638 --rc genhtml_legend=1 00:12:39.638 --rc geninfo_all_blocks=1 00:12:39.638 --rc geninfo_unexecuted_blocks=1 00:12:39.638 00:12:39.638 ' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.638 --rc genhtml_branch_coverage=1 00:12:39.638 --rc genhtml_function_coverage=1 00:12:39.638 --rc genhtml_legend=1 00:12:39.638 --rc geninfo_all_blocks=1 00:12:39.638 --rc geninfo_unexecuted_blocks=1 00:12:39.638 00:12:39.638 ' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.638 --rc genhtml_branch_coverage=1 00:12:39.638 --rc genhtml_function_coverage=1 00:12:39.638 --rc genhtml_legend=1 00:12:39.638 --rc geninfo_all_blocks=1 00:12:39.638 --rc geninfo_unexecuted_blocks=1 00:12:39.638 00:12:39.638 ' 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.638 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.639 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.639 
05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:39.639 Cannot find device "nvmf_init_br" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:39.639 Cannot find device "nvmf_init_br2" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:39.639 Cannot find device "nvmf_tgt_br" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.639 Cannot find device "nvmf_tgt_br2" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:39.639 Cannot find device "nvmf_init_br" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:39.639 Cannot find device "nvmf_init_br2" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:39.639 Cannot find device "nvmf_tgt_br" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:39.639 Cannot find device "nvmf_tgt_br2" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:39.639 Cannot find device "nvmf_br" 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:12:39.639 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:39.639 Cannot find device "nvmf_init_if" 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:39.898 Cannot find device "nvmf_init_if2" 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:12:39.898 
05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:39.898 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:39.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:39.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:39.899 00:12:39.899 --- 10.0.0.3 ping statistics --- 00:12:39.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.899 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:39.899 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:39.899 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:12:39.899 00:12:39.899 --- 10.0.0.4 ping statistics --- 00:12:39.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.899 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:39.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:39.899 00:12:39.899 --- 10.0.0.1 ping statistics --- 00:12:39.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.899 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:39.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:39.899 00:12:39.899 --- 10.0.0.2 ping statistics --- 00:12:39.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.899 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.899 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64358 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64358 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 64358 ']' 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.157 05:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.157 [2024-11-20 05:22:54.495372] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:12:40.157 [2024-11-20 05:22:54.495490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.416 [2024-11-20 05:22:54.695781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.416 [2024-11-20 05:22:54.745204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.416 [2024-11-20 05:22:54.745273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.416 [2024-11-20 05:22:54.745289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.416 [2024-11-20 05:22:54.745300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.416 [2024-11-20 05:22:54.745311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.416 [2024-11-20 05:22:54.746650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.416 [2024-11-20 05:22:54.746710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.416 [2024-11-20 05:22:54.746766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.416 [2024-11-20 05:22:54.746771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 [2024-11-20 05:22:55.664217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 [2024-11-20 05:22:55.675105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 Malloc0 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 [2024-11-20 05:22:55.723043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64399 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64401 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.350 05:22:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.350 { 00:12:41.350 "params": { 00:12:41.350 "name": "Nvme$subsystem", 00:12:41.350 "trtype": "$TEST_TRANSPORT", 00:12:41.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.350 "adrfam": "ipv4", 00:12:41.350 "trsvcid": "$NVMF_PORT", 00:12:41.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.350 "hdgst": ${hdgst:-false}, 00:12:41.350 "ddgst": ${ddgst:-false} 00:12:41.350 }, 00:12:41.350 "method": "bdev_nvme_attach_controller" 00:12:41.350 } 00:12:41.350 EOF 00:12:41.350 )") 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64403 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:41.350 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.350 { 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme$subsystem", 00:12:41.351 "trtype": "$TEST_TRANSPORT", 00:12:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "$NVMF_PORT", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.351 "hdgst": ${hdgst:-false}, 00:12:41.351 "ddgst": ${ddgst:-false} 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 } 00:12:41.351 EOF 00:12:41.351 )") 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64406 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:12:41.351 { 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme$subsystem", 00:12:41.351 "trtype": "$TEST_TRANSPORT", 00:12:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "$NVMF_PORT", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.351 "hdgst": ${hdgst:-false}, 00:12:41.351 "ddgst": ${ddgst:-false} 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 } 00:12:41.351 EOF 00:12:41.351 )") 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.351 { 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme$subsystem", 00:12:41.351 "trtype": "$TEST_TRANSPORT", 00:12:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "$NVMF_PORT", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.351 "hdgst": ${hdgst:-false}, 00:12:41.351 "ddgst": ${ddgst:-false} 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 } 00:12:41.351 EOF 00:12:41.351 )") 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme1", 00:12:41.351 "trtype": "tcp", 00:12:41.351 "traddr": "10.0.0.3", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "4420", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.351 "hdgst": false, 00:12:41.351 "ddgst": false 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 }' 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme1", 00:12:41.351 "trtype": "tcp", 00:12:41.351 "traddr": "10.0.0.3", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "4420", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.351 "hdgst": false, 00:12:41.351 "ddgst": false 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 }' 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme1", 00:12:41.351 "trtype": "tcp", 00:12:41.351 "traddr": "10.0.0.3", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "4420", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.351 "hdgst": false, 00:12:41.351 "ddgst": false 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 }' 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.351 "params": { 00:12:41.351 "name": "Nvme1", 00:12:41.351 "trtype": "tcp", 00:12:41.351 "traddr": "10.0.0.3", 00:12:41.351 "adrfam": "ipv4", 00:12:41.351 "trsvcid": "4420", 00:12:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.351 "hdgst": false, 00:12:41.351 "ddgst": false 00:12:41.351 }, 00:12:41.351 "method": "bdev_nvme_attach_controller" 00:12:41.351 }' 00:12:41.351 [2024-11-20 05:22:55.783086] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:12:41.351 [2024-11-20 05:22:55.783173] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:41.351 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64399 00:12:41.351 [2024-11-20 05:22:55.810714] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:12:41.352 [2024-11-20 05:22:55.811349] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:41.352 [2024-11-20 05:22:55.828795] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:12:41.352 [2024-11-20 05:22:55.828968] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:41.352 [2024-11-20 05:22:55.834192] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:12:41.352 [2024-11-20 05:22:55.835164] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:41.690 [2024-11-20 05:22:55.967831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.690 [2024-11-20 05:22:55.994504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:41.690 [2024-11-20 05:22:56.008123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.690 [2024-11-20 05:22:56.043774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.690 [2024-11-20 05:22:56.047994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.690 [2024-11-20 05:22:56.071084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:41.690 [2024-11-20 05:22:56.075043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:41.690 [2024-11-20 05:22:56.084098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.690 [2024-11-20 05:22:56.088251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.690 [2024-11-20 05:22:56.092846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.690 Running I/O for 1 seconds... 00:12:41.690 [2024-11-20 05:22:56.124215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:41.690 [2024-11-20 05:22:56.137324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.949 Running I/O for 1 seconds... 00:12:41.949 Running I/O for 1 seconds... 00:12:41.949 Running I/O for 1 seconds... 
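At this point four bdevperf instances are in flight, one per workload and core mask: 0x10 write, 0x20 read, 0x40 flush and 0x80 unmap, all against the same Nvme1n1 namespace (the mapping is confirmed by the per-job core masks in the results that follow). A condensed sketch of the orchestration; only FLUSH_PID and UNMAP_PID appear by name in the trace, so the write/read variable names and their pairing with 64399/64401 are assumptions:

# Sketch: bdev_io_wait.sh waits for all four workloads before tearing down the target.
wait "$WRITE_PID"   # 64399 in this run (variable name assumed)
wait "$READ_PID"    # 64401 in this run (variable name assumed)
wait "$FLUSH_PID"   # 64403, set at target/bdev_io_wait.sh@32 above
wait "$UNMAP_PID"   # 64406, set at target/bdev_io_wait.sh@34 above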
00:12:42.883 5812.00 IOPS, 22.70 MiB/s 00:12:42.883 Latency(us) 00:12:42.883 [2024-11-20T05:22:57.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.883 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:42.883 Nvme1n1 : 1.02 5855.45 22.87 0.00 0.00 21695.50 8460.10 27286.81 00:12:42.883 [2024-11-20T05:22:57.396Z] =================================================================================================================== 00:12:42.883 [2024-11-20T05:22:57.396Z] Total : 5855.45 22.87 0.00 0.00 21695.50 8460.10 27286.81 00:12:42.883 5590.00 IOPS, 21.84 MiB/s [2024-11-20T05:22:57.396Z] 7576.00 IOPS, 29.59 MiB/s 00:12:42.883 Latency(us) 00:12:42.883 [2024-11-20T05:22:57.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.883 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:42.883 Nvme1n1 : 1.02 7653.29 29.90 0.00 0.00 16661.17 10187.87 29312.47 00:12:42.883 [2024-11-20T05:22:57.396Z] =================================================================================================================== 00:12:42.883 [2024-11-20T05:22:57.396Z] Total : 7653.29 29.90 0.00 0.00 16661.17 10187.87 29312.47 00:12:42.883 00:12:42.883 Latency(us) 00:12:42.883 [2024-11-20T05:22:57.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.883 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:42.883 Nvme1n1 : 1.02 5632.75 22.00 0.00 0.00 22534.48 5957.82 47424.23 00:12:42.883 [2024-11-20T05:22:57.396Z] =================================================================================================================== 00:12:42.883 [2024-11-20T05:22:57.396Z] Total : 5632.75 22.00 0.00 0.00 22534.48 5957.82 47424.23 00:12:42.883 118288.00 IOPS, 462.06 MiB/s 00:12:42.883 Latency(us) 00:12:42.883 [2024-11-20T05:22:57.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.883 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:42.883 Nvme1n1 : 1.00 117938.06 460.70 0.00 0.00 1078.91 521.31 8043.05 00:12:42.883 [2024-11-20T05:22:57.396Z] =================================================================================================================== 00:12:42.883 [2024-11-20T05:22:57.396Z] Total : 117938.06 460.70 0.00 0.00 1078.91 521.31 8043.05 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64401 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64403 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64406 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.142 rmmod nvme_tcp 00:12:43.142 rmmod nvme_fabrics 00:12:43.142 rmmod nvme_keyring 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64358 ']' 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64358 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 64358 ']' 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 64358 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64358 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64358' 00:12:43.142 killing process with pid 64358 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 64358 00:12:43.142 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 64358 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:43.400 05:22:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.400 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:12:43.658 ************************************ 00:12:43.658 END TEST nvmf_bdev_io_wait 00:12:43.658 ************************************ 00:12:43.658 00:12:43.658 real 0m4.082s 00:12:43.658 user 0m16.304s 00:12:43.658 sys 0m2.241s 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 ************************************ 00:12:43.658 START TEST nvmf_queue_depth 00:12:43.658 ************************************ 00:12:43.658 05:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:43.658 * Looking for test 
storage... 00:12:43.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.658 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:43.658 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:12:43.658 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:43.917 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.918 --rc genhtml_branch_coverage=1 00:12:43.918 --rc genhtml_function_coverage=1 00:12:43.918 --rc genhtml_legend=1 00:12:43.918 --rc geninfo_all_blocks=1 00:12:43.918 --rc geninfo_unexecuted_blocks=1 00:12:43.918 00:12:43.918 ' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.918 --rc genhtml_branch_coverage=1 00:12:43.918 --rc genhtml_function_coverage=1 00:12:43.918 --rc genhtml_legend=1 00:12:43.918 --rc geninfo_all_blocks=1 00:12:43.918 --rc geninfo_unexecuted_blocks=1 00:12:43.918 00:12:43.918 ' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.918 --rc genhtml_branch_coverage=1 00:12:43.918 --rc genhtml_function_coverage=1 00:12:43.918 --rc genhtml_legend=1 00:12:43.918 --rc geninfo_all_blocks=1 00:12:43.918 --rc geninfo_unexecuted_blocks=1 00:12:43.918 00:12:43.918 ' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.918 --rc genhtml_branch_coverage=1 00:12:43.918 --rc genhtml_function_coverage=1 00:12:43.918 --rc genhtml_legend=1 00:12:43.918 --rc geninfo_all_blocks=1 00:12:43.918 --rc geninfo_unexecuted_blocks=1 00:12:43.918 00:12:43.918 ' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:43.918 
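queue_depth.sh sizes its backing device (a 64 MiB, 512-byte-block Malloc bdev) and then calls nvmftestinit, whose veth/bridge bring-up is traced over the following lines. Condensed to the essential commands, all taken from that trace (the failed deletes from the cleanup-before-setup pass are omitted):

# Sketch of the virtual topology nvmf_veth_init establishes: an initiator-side and a
# target-side veth pair joined by a bridge, the target end moved into the
# nvmf_tgt_ns_spdk namespace, and NVMe/TCP port 4420 allowed through iptables.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT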
05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.918 05:22:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:43.918 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:43.919 Cannot find device "nvmf_init_br" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:43.919 Cannot find device "nvmf_init_br2" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:43.919 Cannot find device "nvmf_tgt_br" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.919 Cannot find device "nvmf_tgt_br2" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:43.919 Cannot find device "nvmf_init_br" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:43.919 Cannot find device "nvmf_init_br2" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:43.919 Cannot find device "nvmf_tgt_br" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:43.919 Cannot find device "nvmf_tgt_br2" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:43.919 Cannot find device "nvmf_br" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:43.919 Cannot find device "nvmf_init_if" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:43.919 Cannot find device "nvmf_init_if2" 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.919 05:22:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:43.919 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:44.177 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:44.178 
05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:44.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:44.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:44.178 00:12:44.178 --- 10.0.0.3 ping statistics --- 00:12:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.178 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:44.178 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:44.178 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:12:44.178 00:12:44.178 --- 10.0.0.4 ping statistics --- 00:12:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.178 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:44.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:44.178 00:12:44.178 --- 10.0.0.1 ping statistics --- 00:12:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.178 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:44.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:44.178 00:12:44.178 --- 10.0.0.2 ping statistics --- 00:12:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.178 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64688 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64688 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64688 ']' 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.178 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 [2024-11-20 05:22:58.655628] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:12:44.178 [2024-11-20 05:22:58.655718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.437 [2024-11-20 05:22:58.808198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.437 [2024-11-20 05:22:58.847802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.437 [2024-11-20 05:22:58.847866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.437 [2024-11-20 05:22:58.847879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.437 [2024-11-20 05:22:58.847890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.437 [2024-11-20 05:22:58.847898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.437 [2024-11-20 05:22:58.848277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.437 [2024-11-20 05:22:58.881696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.695 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.695 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:44.695 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.695 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.695 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.695 [2024-11-20 05:22:59.009426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.695 Malloc0 00:12:44.695 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.696 [2024-11-20 05:22:59.056406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64713 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64713 /var/tmp/bdevperf.sock 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64713 ']' 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.696 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.696 [2024-11-20 05:22:59.116070] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:12:44.696 [2024-11-20 05:22:59.116156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64713 ] 00:12:44.953 [2024-11-20 05:22:59.263963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.953 [2024-11-20 05:22:59.321429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.953 [2024-11-20 05:22:59.354713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.953 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.953 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:44.953 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:44.953 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.953 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.211 NVMe0n1 00:12:45.211 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.211 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:45.211 Running I/O for 10 seconds... 00:12:47.519 6144.00 IOPS, 24.00 MiB/s [2024-11-20T05:23:02.967Z] 6268.50 IOPS, 24.49 MiB/s [2024-11-20T05:23:03.901Z] 6507.67 IOPS, 25.42 MiB/s [2024-11-20T05:23:04.863Z] 6670.25 IOPS, 26.06 MiB/s [2024-11-20T05:23:05.821Z] 6773.80 IOPS, 26.46 MiB/s [2024-11-20T05:23:06.755Z] 6937.50 IOPS, 27.10 MiB/s [2024-11-20T05:23:07.690Z] 6936.29 IOPS, 27.09 MiB/s [2024-11-20T05:23:09.063Z] 6925.75 IOPS, 27.05 MiB/s [2024-11-20T05:23:09.996Z] 6962.56 IOPS, 27.20 MiB/s [2024-11-20T05:23:09.996Z] 7010.40 IOPS, 27.38 MiB/s 00:12:55.483 Latency(us) 00:12:55.483 [2024-11-20T05:23:09.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.483 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:55.483 Verification LBA range: start 0x0 length 0x4000 00:12:55.483 NVMe0n1 : 10.07 7051.98 27.55 0.00 0.00 144409.29 11081.54 182070.92 00:12:55.483 [2024-11-20T05:23:09.996Z] =================================================================================================================== 00:12:55.483 [2024-11-20T05:23:09.996Z] Total : 7051.98 27.55 0.00 0.00 144409.29 11081.54 182070.92 00:12:55.483 { 00:12:55.483 "results": [ 00:12:55.483 { 00:12:55.483 "job": "NVMe0n1", 00:12:55.483 "core_mask": "0x1", 00:12:55.483 "workload": "verify", 00:12:55.483 "status": "finished", 00:12:55.483 "verify_range": { 00:12:55.483 "start": 0, 00:12:55.483 "length": 16384 00:12:55.483 }, 00:12:55.483 "queue_depth": 1024, 00:12:55.483 "io_size": 4096, 00:12:55.483 "runtime": 10.072061, 00:12:55.483 "iops": 7051.982707412118, 00:12:55.483 "mibps": 27.546807450828585, 00:12:55.483 "io_failed": 0, 00:12:55.483 "io_timeout": 0, 00:12:55.483 "avg_latency_us": 144409.29334050082, 00:12:55.483 "min_latency_us": 11081.541818181819, 00:12:55.483 "max_latency_us": 182070.92363636364 
00:12:55.483 } 00:12:55.483 ], 00:12:55.483 "core_count": 1 00:12:55.483 } 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64713 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64713 ']' 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64713 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64713 00:12:55.484 killing process with pid 64713 00:12:55.484 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.484 00:12:55.484 Latency(us) 00:12:55.484 [2024-11-20T05:23:09.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.484 [2024-11-20T05:23:09.997Z] =================================================================================================================== 00:12:55.484 [2024-11-20T05:23:09.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64713' 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64713 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64713 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.484 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.484 rmmod nvme_tcp 00:12:55.484 rmmod nvme_fabrics 00:12:55.484 rmmod nvme_keyring 00:12:55.741 05:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.741 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:55.741 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:55.741 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64688 ']' 00:12:55.741 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64688 00:12:55.741 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64688 ']' 
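The per-run JSON above is internally consistent: with bdevperf holding the queue at depth 1024, Little's law (average outstanding I/Os = IOPS x mean latency) recovers almost exactly the configured -q value. A quick check using the logged numbers:

# iops x avg_latency_us / 1e6 ~= average outstanding I/Os
echo '7051.982707412118 * 144409.29334050082 / 1000000' | bc -l    # ~= 1018.4, close to -q 1024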
00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64688 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64688 00:12:55.742 killing process with pid 64688 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64688' 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64688 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64688 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.742 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:56.000 05:23:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.000 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:56.000 00:12:56.000 real 0m12.423s 00:12:56.000 user 0m21.326s 00:12:56.001 sys 0m2.098s 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.001 ************************************ 00:12:56.001 END TEST nvmf_queue_depth 00:12:56.001 ************************************ 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:56.001 ************************************ 00:12:56.001 START TEST nvmf_target_multipath 00:12:56.001 ************************************ 00:12:56.001 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:56.261 * Looking for test storage... 
00:12:56.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:56.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.261 --rc genhtml_branch_coverage=1 00:12:56.261 --rc genhtml_function_coverage=1 00:12:56.261 --rc genhtml_legend=1 00:12:56.261 --rc geninfo_all_blocks=1 00:12:56.261 --rc geninfo_unexecuted_blocks=1 00:12:56.261 00:12:56.261 ' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:56.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.261 --rc genhtml_branch_coverage=1 00:12:56.261 --rc genhtml_function_coverage=1 00:12:56.261 --rc genhtml_legend=1 00:12:56.261 --rc geninfo_all_blocks=1 00:12:56.261 --rc geninfo_unexecuted_blocks=1 00:12:56.261 00:12:56.261 ' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:56.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.261 --rc genhtml_branch_coverage=1 00:12:56.261 --rc genhtml_function_coverage=1 00:12:56.261 --rc genhtml_legend=1 00:12:56.261 --rc geninfo_all_blocks=1 00:12:56.261 --rc geninfo_unexecuted_blocks=1 00:12:56.261 00:12:56.261 ' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:56.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.261 --rc genhtml_branch_coverage=1 00:12:56.261 --rc genhtml_function_coverage=1 00:12:56.261 --rc genhtml_legend=1 00:12:56.261 --rc geninfo_all_blocks=1 00:12:56.261 --rc geninfo_unexecuted_blocks=1 00:12:56.261 00:12:56.261 ' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.261 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.261 
05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.262 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:56.262 05:23:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:56.262 Cannot find device "nvmf_init_br" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:56.262 Cannot find device "nvmf_init_br2" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:56.262 Cannot find device "nvmf_tgt_br" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.262 Cannot find device "nvmf_tgt_br2" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:56.262 Cannot find device "nvmf_init_br" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:56.262 Cannot find device "nvmf_init_br2" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:56.262 Cannot find device "nvmf_tgt_br" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.262 Cannot find device "nvmf_tgt_br2" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.262 Cannot find device "nvmf_br" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.262 Cannot find device "nvmf_init_if" 00:12:56.262 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.520 Cannot find device "nvmf_init_if2" 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.520 05:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:56.520 00:12:56.520 --- 10.0.0.3 ping statistics --- 00:12:56.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.520 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:56.520 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.521 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.521 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:56.521 00:12:56.521 --- 10.0.0.4 ping statistics --- 00:12:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.521 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:56.521 00:12:56.521 --- 10.0.0.1 ping statistics --- 00:12:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.521 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:12:56.521 00:12:56.521 --- 10.0.0.2 ping statistics --- 00:12:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.521 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.521 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65081 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65081 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 65081 ']' 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:56.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:56.779 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 [2024-11-20 05:23:11.104451] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:12:56.779 [2024-11-20 05:23:11.104551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.779 [2024-11-20 05:23:11.251426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.779 [2024-11-20 05:23:11.286883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.779 [2024-11-20 05:23:11.286962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.779 [2024-11-20 05:23:11.286975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.779 [2024-11-20 05:23:11.286984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.779 [2024-11-20 05:23:11.286992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.779 [2024-11-20 05:23:11.287806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.779 [2024-11-20 05:23:11.287864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.779 [2024-11-20 05:23:11.287956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.779 [2024-11-20 05:23:11.287962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.037 [2024-11-20 05:23:11.321393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.037 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:57.295 [2024-11-20 05:23:11.727446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.295 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:57.861 Malloc0 00:12:57.861 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:58.119 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:58.758 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:58.758 [2024-11-20 05:23:13.253040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:59.017 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:12:59.277 [2024-11-20 05:23:13.633602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:12:59.277 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:59.277 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:12:59.536 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.536 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:12:59.536 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.536 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:59.536 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65168 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:01.437 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:01.695 [global] 00:13:01.696 thread=1 00:13:01.696 invalidate=1 00:13:01.696 rw=randrw 00:13:01.696 time_based=1 00:13:01.696 runtime=6 00:13:01.696 ioengine=libaio 00:13:01.696 direct=1 00:13:01.696 bs=4096 00:13:01.696 iodepth=128 00:13:01.696 norandommap=0 00:13:01.696 numjobs=1 00:13:01.696 00:13:01.696 verify_dump=1 00:13:01.696 verify_backlog=512 00:13:01.696 verify_state_save=0 00:13:01.696 do_verify=1 00:13:01.696 verify=crc32c-intel 00:13:01.696 [job0] 00:13:01.696 filename=/dev/nvme0n1 00:13:01.696 Could not set queue depth (nvme0n1) 00:13:01.696 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.696 fio-3.35 00:13:01.696 Starting 1 thread 00:13:02.629 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:02.887 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:03.452 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:03.452 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:03.452 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:03.452 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:03.452 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:03.453 05:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:04.019 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65168 00:13:08.202 00:13:08.202 job0: (groupid=0, jobs=1): err= 0: pid=65195: Wed Nov 20 05:23:22 2024 00:13:08.202 read: IOPS=9238, BW=36.1MiB/s (37.8MB/s)(217MiB/6001msec) 00:13:08.202 slat (usec): min=2, max=10165, avg=63.50, stdev=281.45 00:13:08.202 clat (usec): min=471, max=43172, avg=9506.74, stdev=2863.60 00:13:08.202 lat (usec): min=487, max=43185, avg=9570.24, stdev=2879.20 00:13:08.202 clat percentiles (usec): 00:13:08.202 | 1.00th=[ 4621], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8094], 00:13:08.203 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:13:08.203 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12256], 95.00th=[13698], 00:13:08.203 | 99.00th=[18482], 99.50th=[27657], 99.90th=[37487], 99.95th=[40109], 00:13:08.203 | 99.99th=[43254] 00:13:08.203 bw ( KiB/s): min= 7376, max=25064, per=50.21%, avg=18554.64, stdev=6037.45, samples=11 00:13:08.203 iops : min= 1844, max= 6266, avg=4638.64, stdev=1509.35, samples=11 00:13:08.203 write: IOPS=5421, BW=21.2MiB/s (22.2MB/s)(111MiB/5240msec); 0 zone resets 00:13:08.203 slat (usec): min=4, max=5767, avg=74.67, stdev=197.68 00:13:08.203 clat (usec): min=337, max=36995, avg=8222.14, stdev=2353.52 00:13:08.203 lat (usec): min=370, max=37024, avg=8296.80, stdev=2369.16 00:13:08.203 clat percentiles (usec): 00:13:08.203 | 1.00th=[ 3687], 5.00th=[ 4752], 10.00th=[ 5932], 20.00th=[ 7177], 00:13:08.203 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8356], 00:13:08.203 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[11338], 00:13:08.203 | 99.00th=[16057], 99.50th=[21103], 99.90th=[31851], 99.95th=[33162], 00:13:08.203 | 99.99th=[36963] 00:13:08.203 bw ( KiB/s): min= 7464, max=24576, per=85.53%, avg=18548.18, stdev=5849.40, samples=11 00:13:08.203 iops : min= 1866, max= 6144, avg=4637.00, stdev=1462.33, samples=11 00:13:08.203 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:08.203 lat (msec) : 2=0.03%, 4=0.83%, 10=78.24%, 20=20.11%, 50=0.77% 00:13:08.203 cpu : usr=5.08%, sys=21.78%, ctx=4741, majf=0, minf=90 00:13:08.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:08.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:08.203 issued rwts: total=55438,28407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:08.203 00:13:08.203 Run status group 0 (all jobs): 00:13:08.203 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=217MiB (227MB), run=6001-6001msec 00:13:08.203 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=111MiB (116MB), run=5240-5240msec 00:13:08.203 00:13:08.203 Disk stats (read/write): 00:13:08.203 nvme0n1: ios=54811/27805, merge=0/0, ticks=498737/213525, in_queue=712262, util=98.62% 00:13:08.203 05:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:08.203 05:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65273 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:08.769 05:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:08.769 [global] 00:13:08.769 thread=1 00:13:08.769 invalidate=1 00:13:08.769 rw=randrw 00:13:08.769 time_based=1 00:13:08.769 runtime=6 00:13:08.769 ioengine=libaio 00:13:08.769 direct=1 00:13:08.769 bs=4096 00:13:08.769 iodepth=128 00:13:08.769 norandommap=0 00:13:08.769 numjobs=1 00:13:08.769 00:13:08.769 verify_dump=1 00:13:08.769 verify_backlog=512 00:13:08.769 verify_state_save=0 00:13:08.769 do_verify=1 00:13:08.769 verify=crc32c-intel 00:13:08.769 [job0] 00:13:08.769 filename=/dev/nvme0n1 00:13:08.769 Could not set queue depth (nvme0n1) 00:13:08.769 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:08.769 fio-3.35 00:13:08.769 Starting 1 thread 00:13:09.705 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:10.310 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:10.568 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:10.826 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:11.392 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65273 00:13:15.575 00:13:15.575 job0: (groupid=0, jobs=1): err= 0: pid=65303: Wed Nov 20 05:23:29 2024 00:13:15.575 read: IOPS=10.9k, BW=42.7MiB/s (44.7MB/s)(256MiB/6007msec) 00:13:15.575 slat (usec): min=3, max=7591, avg=45.57, stdev=209.53 00:13:15.575 clat (usec): min=254, max=43066, avg=8019.02, stdev=3344.16 00:13:15.575 lat (usec): min=267, max=43102, avg=8064.59, stdev=3364.80 00:13:15.575 clat percentiles (usec): 00:13:15.575 | 1.00th=[ 1106], 5.00th=[ 3982], 10.00th=[ 4752], 20.00th=[ 5997], 00:13:15.575 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8225], 00:13:15.575 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[12256], 00:13:15.575 | 99.00th=[21365], 99.50th=[30016], 99.90th=[37487], 99.95th=[39060], 00:13:15.575 | 99.99th=[39584] 00:13:15.575 bw ( KiB/s): min=11704, max=32992, per=52.77%, avg=23062.67, stdev=7486.99, samples=12 00:13:15.575 iops : min= 2926, max= 8248, avg=5765.67, stdev=1871.75, samples=12 00:13:15.575 write: IOPS=6510, BW=25.4MiB/s (26.7MB/s)(135MiB/5320msec); 0 zone resets 00:13:15.575 slat (usec): min=4, max=6507, avg=59.12, stdev=154.87 00:13:15.575 clat (usec): min=187, max=38898, avg=6823.34, stdev=3580.26 00:13:15.575 lat (usec): min=221, max=38926, avg=6882.47, stdev=3600.67 00:13:15.575 clat percentiles (usec): 00:13:15.575 | 1.00th=[ 627], 5.00th=[ 2999], 10.00th=[ 3654], 20.00th=[ 4359], 00:13:15.575 | 30.00th=[ 5014], 40.00th=[ 6128], 50.00th=[ 7046], 60.00th=[ 7439], 00:13:15.575 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[10421], 00:13:15.575 | 99.00th=[25035], 99.50th=[30540], 99.90th=[34341], 99.95th=[34866], 00:13:15.575 | 99.99th=[35914] 00:13:15.575 bw ( KiB/s): min=12288, max=32880, per=88.51%, avg=23052.00, stdev=7390.95, samples=12 00:13:15.575 iops : min= 3072, max= 8220, avg=5763.00, stdev=1847.74, samples=12 00:13:15.575 lat (usec) : 250=0.01%, 500=0.30%, 750=0.53%, 1000=0.37% 00:13:15.575 lat (msec) : 2=0.87%, 4=6.29%, 10=82.33%, 20=8.12%, 50=1.18% 00:13:15.575 cpu : usr=6.19%, sys=26.19%, ctx=6347, majf=0, minf=151 00:13:15.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:15.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.575 issued rwts: total=65625,34638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.575 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:13:15.575 00:13:15.575 Run status group 0 (all jobs): 00:13:15.575 READ: bw=42.7MiB/s (44.7MB/s), 42.7MiB/s-42.7MiB/s (44.7MB/s-44.7MB/s), io=256MiB (269MB), run=6007-6007msec 00:13:15.575 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=135MiB (142MB), run=5320-5320msec 00:13:15.575 00:13:15.575 Disk stats (read/write): 00:13:15.575 nvme0n1: ios=64730/34011, merge=0/0, ticks=493833/213569, in_queue=707402, util=98.62% 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.575 rmmod nvme_tcp 00:13:15.575 rmmod nvme_fabrics 00:13:15.575 rmmod nvme_keyring 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- 
# '[' -n 65081 ']' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65081 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 65081 ']' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 65081 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65081 00:13:15.575 killing process with pid 65081 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65081' 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 65081 00:13:15.575 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 65081 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:15.832 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:15.832 
05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:13:15.833 ************************************ 00:13:15.833 END TEST nvmf_target_multipath 00:13:15.833 ************************************ 00:13:15.833 00:13:15.833 real 0m19.849s 00:13:15.833 user 1m14.908s 00:13:15.833 sys 0m10.254s 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.833 05:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:16.091 ************************************ 00:13:16.091 START TEST nvmf_zcopy 00:13:16.091 ************************************ 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:16.091 * Looking for test storage... 
00:13:16.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:16.091 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.092 --rc genhtml_branch_coverage=1 00:13:16.092 --rc genhtml_function_coverage=1 00:13:16.092 --rc genhtml_legend=1 00:13:16.092 --rc geninfo_all_blocks=1 00:13:16.092 --rc geninfo_unexecuted_blocks=1 00:13:16.092 00:13:16.092 ' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.092 --rc genhtml_branch_coverage=1 00:13:16.092 --rc genhtml_function_coverage=1 00:13:16.092 --rc genhtml_legend=1 00:13:16.092 --rc geninfo_all_blocks=1 00:13:16.092 --rc geninfo_unexecuted_blocks=1 00:13:16.092 00:13:16.092 ' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.092 --rc genhtml_branch_coverage=1 00:13:16.092 --rc genhtml_function_coverage=1 00:13:16.092 --rc genhtml_legend=1 00:13:16.092 --rc geninfo_all_blocks=1 00:13:16.092 --rc geninfo_unexecuted_blocks=1 00:13:16.092 00:13:16.092 ' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.092 --rc genhtml_branch_coverage=1 00:13:16.092 --rc genhtml_function_coverage=1 00:13:16.092 --rc genhtml_legend=1 00:13:16.092 --rc geninfo_all_blocks=1 00:13:16.092 --rc geninfo_unexecuted_blocks=1 00:13:16.092 00:13:16.092 ' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:16.092 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:16.093 Cannot find device "nvmf_init_br" 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:16.093 05:23:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:16.093 Cannot find device "nvmf_init_br2" 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:16.093 Cannot find device "nvmf_tgt_br" 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:13:16.093 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.351 Cannot find device "nvmf_tgt_br2" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:16.351 Cannot find device "nvmf_init_br" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:16.351 Cannot find device "nvmf_init_br2" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:16.351 Cannot find device "nvmf_tgt_br" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:16.351 Cannot find device "nvmf_tgt_br2" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:16.351 Cannot find device "nvmf_br" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:16.351 Cannot find device "nvmf_init_if" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:16.351 Cannot find device "nvmf_init_if2" 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:16.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:16.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:16.351 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.609 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:16.610 05:23:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:16.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:16.610 00:13:16.610 --- 10.0.0.3 ping statistics --- 00:13:16.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.610 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:16.610 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:16.610 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms 00:13:16.610 00:13:16.610 --- 10.0.0.4 ping statistics --- 00:13:16.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.610 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:16.610 00:13:16.610 --- 10.0.0.1 ping statistics --- 00:13:16.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.610 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:16.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:16.610 00:13:16.610 --- 10.0.0.2 ping statistics --- 00:13:16.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.610 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.610 05:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65601 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65601 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65601 ']' 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.610 05:23:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.610 [2024-11-20 05:23:31.073445] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:13:16.610 [2024-11-20 05:23:31.073548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.868 [2024-11-20 05:23:31.230011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.868 [2024-11-20 05:23:31.263352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.868 [2024-11-20 05:23:31.263406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.868 [2024-11-20 05:23:31.263417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.868 [2024-11-20 05:23:31.263426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.868 [2024-11-20 05:23:31.263433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.868 [2024-11-20 05:23:31.263740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.868 [2024-11-20 05:23:31.294407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 [2024-11-20 05:23:32.103677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.800 [2024-11-20 05:23:32.119780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 malloc0 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:17.800 { 00:13:17.800 "params": { 00:13:17.800 "name": "Nvme$subsystem", 00:13:17.800 "trtype": "$TEST_TRANSPORT", 00:13:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.800 "adrfam": "ipv4", 00:13:17.800 "trsvcid": "$NVMF_PORT", 00:13:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.800 "hdgst": ${hdgst:-false}, 00:13:17.800 "ddgst": ${ddgst:-false} 00:13:17.800 }, 00:13:17.800 "method": "bdev_nvme_attach_controller" 00:13:17.800 } 00:13:17.800 EOF 00:13:17.800 )") 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:17.800 05:23:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:17.800 "params": { 00:13:17.800 "name": "Nvme1", 00:13:17.800 "trtype": "tcp", 00:13:17.800 "traddr": "10.0.0.3", 00:13:17.800 "adrfam": "ipv4", 00:13:17.800 "trsvcid": "4420", 00:13:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.800 "hdgst": false, 00:13:17.800 "ddgst": false 00:13:17.800 }, 00:13:17.800 "method": "bdev_nvme_attach_controller" 00:13:17.800 }' 00:13:17.800 [2024-11-20 05:23:32.234444] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:13:17.800 [2024-11-20 05:23:32.234578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65634 ] 00:13:18.059 [2024-11-20 05:23:32.466360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.059 [2024-11-20 05:23:32.499442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.059 [2024-11-20 05:23:32.537330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.319 Running I/O for 10 seconds... 00:13:20.191 5863.00 IOPS, 45.80 MiB/s [2024-11-20T05:23:36.080Z] 5876.00 IOPS, 45.91 MiB/s [2024-11-20T05:23:36.646Z] 5887.33 IOPS, 45.99 MiB/s [2024-11-20T05:23:38.022Z] 5882.25 IOPS, 45.96 MiB/s [2024-11-20T05:23:38.956Z] 5821.60 IOPS, 45.48 MiB/s [2024-11-20T05:23:39.890Z] 5791.50 IOPS, 45.25 MiB/s [2024-11-20T05:23:40.823Z] 5741.71 IOPS, 44.86 MiB/s [2024-11-20T05:23:41.757Z] 5732.88 IOPS, 44.79 MiB/s [2024-11-20T05:23:42.691Z] 5742.11 IOPS, 44.86 MiB/s [2024-11-20T05:23:42.691Z] 5723.30 IOPS, 44.71 MiB/s 00:13:28.178 Latency(us) 00:13:28.178 [2024-11-20T05:23:42.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:28.178 Verification LBA range: start 0x0 length 0x1000 00:13:28.178 Nvme1n1 : 10.02 5726.12 44.74 0.00 0.00 22281.34 2755.49 31933.91 00:13:28.178 [2024-11-20T05:23:42.691Z] =================================================================================================================== 00:13:28.178 [2024-11-20T05:23:42.691Z] Total : 5726.12 44.74 0.00 0.00 22281.34 2755.49 31933.91 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65752 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:28.437 { 00:13:28.437 "params": { 00:13:28.437 "name": "Nvme$subsystem", 00:13:28.437 "trtype": "$TEST_TRANSPORT", 00:13:28.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.437 "adrfam": "ipv4", 00:13:28.437 "trsvcid": "$NVMF_PORT", 00:13:28.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.437 "hdgst": ${hdgst:-false}, 00:13:28.437 "ddgst": ${ddgst:-false} 00:13:28.437 }, 00:13:28.437 "method": "bdev_nvme_attach_controller" 00:13:28.437 } 00:13:28.437 EOF 00:13:28.437 )") 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:28.437 [2024-11-20 05:23:42.806927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.806977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:28.437 05:23:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:28.437 "params": { 00:13:28.437 "name": "Nvme1", 00:13:28.437 "trtype": "tcp", 00:13:28.437 "traddr": "10.0.0.3", 00:13:28.437 "adrfam": "ipv4", 00:13:28.437 "trsvcid": "4420", 00:13:28.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.437 "hdgst": false, 00:13:28.437 "ddgst": false 00:13:28.437 }, 00:13:28.437 "method": "bdev_nvme_attach_controller" 00:13:28.437 }' 00:13:28.437 [2024-11-20 05:23:42.814872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.814923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.822871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.822919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.830875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.830931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.842961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.843014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.850325] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:13:28.437 [2024-11-20 05:23:42.850425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65752 ] 00:13:28.437 [2024-11-20 05:23:42.854893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.854954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.862888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.862936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.870896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.870955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.878889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.878934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.886894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.886955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.894888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.894946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.902891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.902948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.910890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.910936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.918897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.918950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.930942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.930985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.437 [2024-11-20 05:23:42.942940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.437 [2024-11-20 05:23:42.942983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.950945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.950986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.958919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.958953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.970978] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.971026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.978964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.979006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.986969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.987009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:42.995636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.696 [2024-11-20 05:23:42.998966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:42.999006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.011019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.011082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.018962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.019012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.027033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.027082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.032606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.696 [2024-11-20 05:23:43.034956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.034993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.042982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.043016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.054992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.055052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.067021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.067088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.073394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.696 [2024-11-20 05:23:43.074983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.075017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.086989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.087033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.095002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.095057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.102974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.103009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.111210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.111248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.119214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.119250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.127256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.127303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.135271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.135308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.143369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.143405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.151269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.151304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.159296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.159332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.167329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.167365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.179341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.179392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 Running I/O for 5 seconds... 
00:13:28.696 [2024-11-20 05:23:43.187367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.187404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.696 [2024-11-20 05:23:43.202107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.696 [2024-11-20 05:23:43.202154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.214838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.214919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.230243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.230286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.241525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.241564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.253365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.253408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.267100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.267145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.279391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.279439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.293325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.293369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.305296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.305343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.319809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.319855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.334046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.334094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.350094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.350157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.367862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.367939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.377732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 
[2024-11-20 05:23:43.377780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.390069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.390128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.407122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.407166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.419612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.419676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.436259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.436306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.447962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.448020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.955 [2024-11-20 05:23:43.460921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.955 [2024-11-20 05:23:43.460970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.472605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.472656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.488690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.488753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.505998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.506055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.516841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.516922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.530553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.530600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.544731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.544776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.555145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.555196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.568320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.568374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.580396] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.580442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.592692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.592736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.605309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.605357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.621859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.621942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.637842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.637956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.655461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.655552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.669158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.669246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.685473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.685563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.704382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.704473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.213 [2024-11-20 05:23:43.718408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.213 [2024-11-20 05:23:43.718488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.734292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.734385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.749323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.749415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.767024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.767117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.783482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.783571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.800346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.800417] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.816788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.816862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.833786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.833846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.851099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.851158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.867556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.867632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.880264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.880325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.897007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.897070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.910889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.910961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.927334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.927389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.944333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.944386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.960111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.960167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.472 [2024-11-20 05:23:43.972868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.472 [2024-11-20 05:23:43.972937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:43.990983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:43.991044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.004357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.004412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.020849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.020899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.036338] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.036395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.046501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.046554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.059394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.059456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.071275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.071339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.087919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.087981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.104525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.104590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.118974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.119039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.135153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.135199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.155058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.155120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.167533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.167579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 9835.00 IOPS, 76.84 MiB/s [2024-11-20T05:23:44.244Z] [2024-11-20 05:23:44.185071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.185117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.198583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.198644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.215587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.215645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.229987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.731 [2024-11-20 05:23:44.230034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.731 [2024-11-20 05:23:44.240203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:29.731 [2024-11-20 05:23:44.240244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.255741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.255793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.267449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.267508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.280041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.280084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.292552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.292602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.304915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.304959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.317565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.317612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.329134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.329185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.341442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.341490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.353145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.353191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.364574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.364621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.376539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.376591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.389016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.389067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.402270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.402337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.420871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.420932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.431827] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.431871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.443764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.443817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.455875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.455942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.468876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.468940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.486688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.486739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.989 [2024-11-20 05:23:44.497261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.989 [2024-11-20 05:23:44.497312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.513517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.513593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.528016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.528078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.538538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.538602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.551581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.551673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.567019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.567067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.584711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.584763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.600596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.600668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.610670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.610716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.627472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.627525] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.639388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.639428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.656096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.656146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.669584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.669630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.686661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.686732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.703798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.703884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.717234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.717316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.735724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.735819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.247 [2024-11-20 05:23:44.749588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.247 [2024-11-20 05:23:44.749662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.767712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.767795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.782367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.782453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.800667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.800752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.815183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.815262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.833346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.833433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.505 [2024-11-20 05:23:44.847150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.505 [2024-11-20 05:23:44.847214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.865651] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.865734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.879511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.879577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.894247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.894310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.909316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.909377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.924555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.924621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.937818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.937878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.955739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.955809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.969391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.969450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.985099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.985154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:44.999438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:44.999493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.506 [2024-11-20 05:23:45.016140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.506 [2024-11-20 05:23:45.016195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.030348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.030409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.045217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.045270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.063049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.063104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.079411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.079454] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.093650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.093701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.112501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.112554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.153048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.153114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.164975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.165015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.175794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.175832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 9504.50 IOPS, 74.25 MiB/s [2024-11-20T05:23:45.277Z] [2024-11-20 05:23:45.186229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.186266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.197810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.197851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.209327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.209362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.222815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.222854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.239632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.239671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.249474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.249510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.261527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.261571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.764 [2024-11-20 05:23:45.272821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.764 [2024-11-20 05:23:45.272859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.286131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.286167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 
05:23:45.296157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.296194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.307926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.307966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.323769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.323809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.334195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.334249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.346265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.346325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.361883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.361963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.377694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.377751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.387781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.387834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.403119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.403172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.413297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.413349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.425375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.425422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.436563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.436598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.449712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.449752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.459559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.459593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.474821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.474855] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.484756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.484788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.496839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.496876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.507851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.507894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.523057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.523096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.022 [2024-11-20 05:23:45.533641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.022 [2024-11-20 05:23:45.533679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.280 [2024-11-20 05:23:45.545298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.280 [2024-11-20 05:23:45.545333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.280 [2024-11-20 05:23:45.556509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.556544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.567398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.567432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.580685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.580718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.597668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.597703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.615427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.615464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.631406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.631444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.641524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.641559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.656953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.656988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.674277] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.674313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.684592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.684631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.696456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.696497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.711566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.711602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.729782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.729820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.745032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.745065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.762248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.762284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.772556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.772591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.281 [2024-11-20 05:23:45.787417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.281 [2024-11-20 05:23:45.787472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.804228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.804295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.821849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.821922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.837059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.837120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.853143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.853199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.871322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.871368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.886304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.886357] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.895729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.895763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.911837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.911872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.929483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.929521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.944641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.944679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.954430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.954465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.970863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.970914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.988068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.988101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:45.998608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:45.998644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:46.010603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:46.010641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:46.021810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:46.021845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.539 [2024-11-20 05:23:46.038359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.539 [2024-11-20 05:23:46.038395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.054752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.054792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.064684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.064721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.079845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.079881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.095422] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.095458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.114042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.114082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.129610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.129648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.139805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.139844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.152487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.152520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.168631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.168668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 10086.00 IOPS, 78.80 MiB/s [2024-11-20T05:23:46.310Z] [2024-11-20 05:23:46.184608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.184647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.797 [2024-11-20 05:23:46.200816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.797 [2024-11-20 05:23:46.200858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.210763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.210801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.226998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.227042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.242883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.242945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.252553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.252593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.264975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.265021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.276595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.276634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.288223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:31.798 [2024-11-20 05:23:46.288263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.798 [2024-11-20 05:23:46.303928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.798 [2024-11-20 05:23:46.303969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.314500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.314543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.326542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.326581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.337932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.338006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.351478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.351523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.361780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.361823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.373705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.373749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.384914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.384957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.396253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.396297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.407547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.407598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.418863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.418917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.430612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.430652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.442178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.442216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.454799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.454836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.467705] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.467745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.482873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.482927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.499698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.499758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.056 [2024-11-20 05:23:46.516337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.056 [2024-11-20 05:23:46.516379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.057 [2024-11-20 05:23:46.526208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.057 [2024-11-20 05:23:46.526246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.057 [2024-11-20 05:23:46.541865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.057 [2024-11-20 05:23:46.541927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.057 [2024-11-20 05:23:46.559297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.057 [2024-11-20 05:23:46.559359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.572974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.573028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.588350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.588398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.603511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.603560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.617633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.617683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.635278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.635322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.647888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.647947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.663614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.663663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.673426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.673462] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.685637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.685677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.696929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.696962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.713385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.713432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.725037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.725073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.737677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.737722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.749442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.749480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.766697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.766737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.783585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.783632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.793513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.793553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.807274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.807316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.315 [2024-11-20 05:23:46.818497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.315 [2024-11-20 05:23:46.818535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.835845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.835893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.845568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.845608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.857628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.857667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.872834] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.872877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.889074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.889119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.909079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.909119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.920722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.920759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.936820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.936857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.953199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.953239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.969594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.969630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:46.985880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:46.985946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.004114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.004164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.018941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.019012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.029125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.029183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.045917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.045970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.061277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.061336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.575 [2024-11-20 05:23:47.077556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.575 [2024-11-20 05:23:47.077599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.093762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.093811] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.110802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.110843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.121277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.121322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.137138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.137184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.151505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.151547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.167571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.167612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.183370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.183420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 10252.75 IOPS, 80.10 MiB/s [2024-11-20T05:23:47.346Z] [2024-11-20 05:23:47.193663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.193698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.209551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.209593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.225966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.226001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.243087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.243119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.259022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.259055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.275972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.276007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.292527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.292560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.309044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.309074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 
05:23:47.325306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.325342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.833 [2024-11-20 05:23:47.342856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.833 [2024-11-20 05:23:47.342890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.357716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.357749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.375432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.375466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.390446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.390503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.400394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.400433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.417050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.417090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.450404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.450479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.493002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.493074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.531818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.531876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.564028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.564088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.579893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.579961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.092 [2024-11-20 05:23:47.597254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.092 [2024-11-20 05:23:47.597307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.613023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.613060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.622973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.623007] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.635156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.635191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.646881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.646927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.662578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.662614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.679334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.679382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.689813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.689853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.704285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.704322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.720700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.720737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.730701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.730735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.746360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.746396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.762483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.762524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.779162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.779199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.795517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.795553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.813808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.813849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.829058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.829093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.838686] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.838726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.350 [2024-11-20 05:23:47.850999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.350 [2024-11-20 05:23:47.851033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.867034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.867071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.877200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.877238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.893873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.893949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.909765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.909821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.925159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.925213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.940403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.940452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.955482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.955531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.970525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.970567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:47.985374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:47.985419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.001159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.001208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.014174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.014215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.029650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.029702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.048984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.049038] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.063497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.063542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.079564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.079615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.094492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.094542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.608 [2024-11-20 05:23:48.110454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.608 [2024-11-20 05:23:48.110501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.130486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.130535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.144817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.144866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.161468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.161520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.177730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.177782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 10049.00 IOPS, 78.51 MiB/s [2024-11-20T05:23:48.381Z] [2024-11-20 05:23:48.193539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.193578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 00:13:33.868 Latency(us) 00:13:33.868 [2024-11-20T05:23:48.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.868 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:33.868 Nvme1n1 : 5.01 10049.78 78.51 0.00 0.00 12717.11 4230.05 72923.69 00:13:33.868 [2024-11-20T05:23:48.381Z] =================================================================================================================== 00:13:33.868 [2024-11-20T05:23:48.381Z] Total : 10049.78 78.51 0.00 0.00 12717.11 4230.05 72923.69 00:13:33.868 [2024-11-20 05:23:48.198034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.198068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.210027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.210076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.218047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 
05:23:48.218083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.230087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.230142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.242148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.242226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.250079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.250124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.258119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.258178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.266118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.266176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.274124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.274185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.286112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.286168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.298098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.298142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.310119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.310181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.318119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.318168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.326089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.326131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 [2024-11-20 05:23:48.338094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.868 [2024-11-20 05:23:48.338134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.868 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65752) - No such process 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65752 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:33.868 delay0 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.868 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:13:34.126 [2024-11-20 05:23:48.542823] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:40.685 Initializing NVMe Controllers 00:13:40.685 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.685 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.685 Initialization complete. Launching workers. 
00:13:40.685 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:13:40.685 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33 00:13:40.685 success 239, unsuccessful 113, failed 0 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.685 rmmod nvme_tcp 00:13:40.685 rmmod nvme_fabrics 00:13:40.685 rmmod nvme_keyring 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65601 ']' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65601 ']' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:40.685 killing process with pid 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65601' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65601 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.685 05:23:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:40.685 05:23:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:13:40.685 ************************************ 00:13:40.685 END TEST nvmf_zcopy 00:13:40.685 ************************************ 00:13:40.685 00:13:40.685 real 0m24.775s 00:13:40.685 user 0m40.196s 00:13:40.685 sys 0m6.602s 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:40.685 05:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:40.685 ************************************ 00:13:40.685 START TEST nvmf_nmic 00:13:40.685 ************************************ 00:13:40.685 05:23:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:40.989 * Looking for test storage... 00:13:40.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.989 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.990 --rc genhtml_branch_coverage=1 00:13:40.990 --rc genhtml_function_coverage=1 00:13:40.990 --rc genhtml_legend=1 00:13:40.990 --rc geninfo_all_blocks=1 00:13:40.990 --rc geninfo_unexecuted_blocks=1 00:13:40.990 00:13:40.990 ' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.990 --rc genhtml_branch_coverage=1 00:13:40.990 --rc genhtml_function_coverage=1 00:13:40.990 --rc genhtml_legend=1 00:13:40.990 --rc geninfo_all_blocks=1 00:13:40.990 --rc geninfo_unexecuted_blocks=1 00:13:40.990 00:13:40.990 ' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.990 --rc genhtml_branch_coverage=1 00:13:40.990 --rc genhtml_function_coverage=1 00:13:40.990 --rc genhtml_legend=1 00:13:40.990 --rc geninfo_all_blocks=1 00:13:40.990 --rc geninfo_unexecuted_blocks=1 00:13:40.990 00:13:40.990 ' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.990 --rc genhtml_branch_coverage=1 00:13:40.990 --rc genhtml_function_coverage=1 00:13:40.990 --rc genhtml_legend=1 00:13:40.990 --rc geninfo_all_blocks=1 00:13:40.990 --rc geninfo_unexecuted_blocks=1 00:13:40.990 00:13:40.990 ' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.990 05:23:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:40.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:40.990 05:23:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.990 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:40.991 Cannot 
find device "nvmf_init_br" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:40.991 Cannot find device "nvmf_init_br2" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:40.991 Cannot find device "nvmf_tgt_br" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.991 Cannot find device "nvmf_tgt_br2" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:40.991 Cannot find device "nvmf_init_br" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:40.991 Cannot find device "nvmf_init_br2" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:40.991 Cannot find device "nvmf_tgt_br" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:40.991 Cannot find device "nvmf_tgt_br2" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:40.991 Cannot find device "nvmf_br" 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:13:40.991 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:41.250 Cannot find device "nvmf_init_if" 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:41.250 Cannot find device "nvmf_init_if2" 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:41.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:41.250 00:13:41.250 --- 10.0.0.3 ping statistics --- 00:13:41.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.250 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:41.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:41.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:13:41.250 00:13:41.250 --- 10.0.0.4 ping statistics --- 00:13:41.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.250 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:41.250 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:13:41.250 00:13:41.250 --- 10.0.0.1 ping statistics --- 00:13:41.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.250 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:41.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:13:41.509 00:13:41.509 --- 10.0.0.2 ping statistics --- 00:13:41.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.509 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66127 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66127 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 66127 ']' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:41.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:41.509 05:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 [2024-11-20 05:23:55.851104] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:13:41.509 [2024-11-20 05:23:55.851186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.509 [2024-11-20 05:23:56.000371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.769 [2024-11-20 05:23:56.034728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.769 [2024-11-20 05:23:56.034781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.769 [2024-11-20 05:23:56.034793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.769 [2024-11-20 05:23:56.034802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.769 [2024-11-20 05:23:56.034809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.769 [2024-11-20 05:23:56.035538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.769 [2024-11-20 05:23:56.035670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.769 [2024-11-20 05:23:56.035729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.769 [2024-11-20 05:23:56.035731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.769 [2024-11-20 05:23:56.065402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 [2024-11-20 05:23:56.154388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 Malloc0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.769 05:23:56 
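Once nvmf_tgt (pid 66127) is running inside the namespace, the test provisions it over JSON-RPC; the rpc_cmd calls traced here and in the entries just below map onto scripts/rpc.py invocations. A condensed sketch of that provisioning sequence, assuming the default /var/tmp/spdk.sock RPC socket and reusing the arguments shown in the log:

```bash
# Sketch only: the same provisioning rpc_cmd performs, expressed as direct rpc.py calls.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, options as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB malloc bdev with 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
```

Test case1, traced right after, deliberately tries to add the same Malloc0 to a second subsystem (cnode2) and expects the -32602 "Invalid parameters" error, because the bdev is already claimed for exclusive write by the first subsystem.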
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 [2024-11-20 05:23:56.215564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 test case1: single bdev can't be used in multiple subsystems 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.769 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.769 [2024-11-20 05:23:56.243432] bdev.c:8321:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:41.769 [2024-11-20 05:23:56.243473] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:41.769 [2024-11-20 05:23:56.243485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.769 request: 00:13:41.769 { 00:13:41.769 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:41.769 "namespace": { 00:13:41.769 "bdev_name": "Malloc0", 00:13:41.769 "no_auto_visible": false 00:13:41.769 }, 00:13:41.769 "method": "nvmf_subsystem_add_ns", 00:13:41.769 "req_id": 1 00:13:41.769 } 00:13:41.769 Got JSON-RPC error response 00:13:41.769 response: 00:13:41.769 { 00:13:41.769 "code": -32602, 00:13:41.769 "message": "Invalid parameters" 00:13:41.769 } 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:41.770 Adding namespace failed - expected result. 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:41.770 test case2: host connect to nvmf target in multiple paths 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:41.770 [2024-11-20 05:23:56.255556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.770 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:42.028 05:23:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.557 05:23:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:13:44.557 05:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:44.557 [global] 00:13:44.557 thread=1 00:13:44.557 invalidate=1 00:13:44.557 rw=write 00:13:44.557 time_based=1 00:13:44.557 runtime=1 00:13:44.557 ioengine=libaio 00:13:44.557 direct=1 00:13:44.557 bs=4096 00:13:44.557 iodepth=1 00:13:44.557 norandommap=0 00:13:44.557 numjobs=1 00:13:44.557 00:13:44.557 verify_dump=1 00:13:44.557 verify_backlog=512 00:13:44.557 verify_state_save=0 00:13:44.557 do_verify=1 00:13:44.557 verify=crc32c-intel 00:13:44.557 [job0] 00:13:44.557 filename=/dev/nvme0n1 00:13:44.557 Could not set queue depth (nvme0n1) 00:13:44.557 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.557 fio-3.35 00:13:44.557 Starting 1 thread 00:13:45.492 00:13:45.492 job0: (groupid=0, jobs=1): err= 0: pid=66205: Wed Nov 20 05:23:59 2024 00:13:45.492 read: IOPS=2736, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:13:45.492 slat (nsec): min=11961, max=61399, avg=17729.76, stdev=5384.35 00:13:45.492 clat (usec): min=147, max=293, avg=187.80, stdev=17.40 00:13:45.492 lat (usec): min=163, max=323, avg=205.53, stdev=18.76 00:13:45.492 clat percentiles (usec): 00:13:45.492 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:13:45.492 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:13:45.492 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:13:45.492 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 281], 00:13:45.492 | 99.99th=[ 293] 00:13:45.492 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:45.492 slat (usec): min=17, max=140, avg=24.97, stdev= 6.91 00:13:45.492 clat (usec): min=56, max=242, avg=113.69, stdev=14.02 00:13:45.492 lat (usec): min=108, max=383, avg=138.66, stdev=17.08 00:13:45.492 clat percentiles (usec): 00:13:45.492 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:13:45.492 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 115], 00:13:45.492 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 141], 00:13:45.492 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 174], 00:13:45.492 | 99.99th=[ 243] 00:13:45.492 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:13:45.492 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:45.492 lat (usec) : 100=7.54%, 250=92.32%, 500=0.14% 00:13:45.492 cpu : usr=2.60%, sys=10.00%, ctx=5811, majf=0, minf=5 00:13:45.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.492 issued rwts: total=2739,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.492 00:13:45.492 Run status group 0 (all jobs): 00:13:45.492 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:13:45.492 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:13:45.492 00:13:45.492 Disk stats (read/write): 00:13:45.492 nvme0n1: ios=2610/2587, merge=0/0, ticks=529/309, in_queue=838, 
util=91.48% 00:13:45.492 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.493 05:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.493 rmmod nvme_tcp 00:13:45.493 rmmod nvme_fabrics 00:13:45.751 rmmod nvme_keyring 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66127 ']' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66127 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 66127 ']' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 66127 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66127 00:13:45.751 killing process with pid 66127 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66127' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 66127 
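Teardown runs in roughly the reverse order of setup: disconnect the host, unload the initiator-side NVMe modules, kill and wait for the nvmf_tgt process (pid 66127 here), strip the SPDK-tagged iptables rules, and delete the veth/bridge/namespace topology. A condensed sketch of that cleanup, assuming the same names as above (the final namespace removal stands in for the _remove_spdk_ns helper traced below):

```bash
# Sketch of the cleanup path; interface and namespace names match the setup sketch earlier.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both paths to the subsystem
modprobe -v -r nvme-tcp                                # unloads nvme_tcp, nvme_fabrics, nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                     # $nvmfpid: target pid captured at startup (66127 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # removes only rules carrying the SPDK_NVMF comment
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of _remove_spdk_ns; takes its veth ends with it
```

After this the nvmf_nmic test reports its runtime and the harness moves on to the next test (nvmf_fio_target), which repeats the same environment setup.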
00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 66127 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:45.751 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:45.752 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.752 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:45.752 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:45.752 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:45.752 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:13:46.009 00:13:46.009 real 0m5.309s 00:13:46.009 user 0m15.513s 00:13:46.009 sys 0m2.180s 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.009 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.009 ************************************ 00:13:46.009 
END TEST nvmf_nmic 00:13:46.009 ************************************ 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:46.269 ************************************ 00:13:46.269 START TEST nvmf_fio_target 00:13:46.269 ************************************ 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:46.269 * Looking for test storage... 00:13:46.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:46.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.269 --rc genhtml_branch_coverage=1 00:13:46.269 --rc genhtml_function_coverage=1 00:13:46.269 --rc genhtml_legend=1 00:13:46.269 --rc geninfo_all_blocks=1 00:13:46.269 --rc geninfo_unexecuted_blocks=1 00:13:46.269 00:13:46.269 ' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:46.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.269 --rc genhtml_branch_coverage=1 00:13:46.269 --rc genhtml_function_coverage=1 00:13:46.269 --rc genhtml_legend=1 00:13:46.269 --rc geninfo_all_blocks=1 00:13:46.269 --rc geninfo_unexecuted_blocks=1 00:13:46.269 00:13:46.269 ' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:46.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.269 --rc genhtml_branch_coverage=1 00:13:46.269 --rc genhtml_function_coverage=1 00:13:46.269 --rc genhtml_legend=1 00:13:46.269 --rc geninfo_all_blocks=1 00:13:46.269 --rc geninfo_unexecuted_blocks=1 00:13:46.269 00:13:46.269 ' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:46.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.269 --rc genhtml_branch_coverage=1 00:13:46.269 --rc genhtml_function_coverage=1 00:13:46.269 --rc genhtml_legend=1 00:13:46.269 --rc geninfo_all_blocks=1 00:13:46.269 --rc geninfo_unexecuted_blocks=1 00:13:46.269 00:13:46.269 ' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:46.269 
05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.269 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.270 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.270 05:24:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:46.270 Cannot find device "nvmf_init_br" 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:46.270 Cannot find device "nvmf_init_br2" 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:46.270 Cannot find device "nvmf_tgt_br" 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:13:46.270 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.529 Cannot find device "nvmf_tgt_br2" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:46.529 Cannot find device "nvmf_init_br" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:46.529 Cannot find device "nvmf_init_br2" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:46.529 Cannot find device "nvmf_tgt_br" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:46.529 Cannot find device "nvmf_tgt_br2" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:46.529 Cannot find device "nvmf_br" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:46.529 Cannot find device "nvmf_init_if" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:46.529 Cannot find device "nvmf_init_if2" 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:13:46.529 
05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:46.529 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.529 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:46.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:46.789 00:13:46.789 --- 10.0.0.3 ping statistics --- 00:13:46.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.789 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:46.789 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:46.789 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:13:46.789 00:13:46.789 --- 10.0.0.4 ping statistics --- 00:13:46.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.789 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:46.789 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:13:46.789 00:13:46.789 --- 10.0.0.1 ping statistics --- 00:13:46.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.790 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:46.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:46.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:46.790 00:13:46.790 --- 10.0.0.2 ping statistics --- 00:13:46.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.790 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66440 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66440 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 66440 ']' 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.790 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.790 [2024-11-20 05:24:01.214034] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
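For reference, the nvmf_veth_init trace above condenses to the topology sketch below; the earlier "Cannot find device" / "Cannot open network namespace" messages are just the tear-down of a previous run's topology and are expected to fail. This is a distillation of the commands already shown in the trace, not an extra step of this run:

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator side gets 10.0.0.1/.2, the namespaced target side gets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, then hang the four peer ends off one bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  # allow NVMe/TCP (port 4420) in on both initiator interfaces and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks in the trace then confirm both directions (host to 10.0.0.3/.4, namespace to 10.0.0.1/.2) before nvmf_tgt is started inside the namespace via ip netns exec.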
00:13:46.790 [2024-11-20 05:24:01.214141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.047 [2024-11-20 05:24:01.368197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.047 [2024-11-20 05:24:01.409758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.047 [2024-11-20 05:24:01.409820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.047 [2024-11-20 05:24:01.409834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.047 [2024-11-20 05:24:01.409844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.047 [2024-11-20 05:24:01.409853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.047 [2024-11-20 05:24:01.410739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.047 [2024-11-20 05:24:01.410839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.047 [2024-11-20 05:24:01.410919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.047 [2024-11-20 05:24:01.410930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.047 [2024-11-20 05:24:01.446447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.047 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.047 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:13:47.047 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.048 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.048 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.048 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.048 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.614 [2024-11-20 05:24:01.868255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.614 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.873 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:47.873 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.131 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:48.131 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.389 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:48.389 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.956 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:48.956 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:49.215 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.473 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:49.473 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.039 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:50.039 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.297 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:50.298 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:50.556 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:50.815 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.815 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.073 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:51.073 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.640 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:51.899 [2024-11-20 05:24:06.217369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:51.899 05:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:52.158 05:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:52.416 05:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:52.674 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:52.674 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:13:52.674 05:24:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.674 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:13:52.674 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:13:52.674 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:13:54.596 05:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:54.596 [global] 00:13:54.596 thread=1 00:13:54.596 invalidate=1 00:13:54.596 rw=write 00:13:54.596 time_based=1 00:13:54.596 runtime=1 00:13:54.596 ioengine=libaio 00:13:54.596 direct=1 00:13:54.596 bs=4096 00:13:54.596 iodepth=1 00:13:54.596 norandommap=0 00:13:54.596 numjobs=1 00:13:54.596 00:13:54.596 verify_dump=1 00:13:54.596 verify_backlog=512 00:13:54.596 verify_state_save=0 00:13:54.596 do_verify=1 00:13:54.596 verify=crc32c-intel 00:13:54.596 [job0] 00:13:54.596 filename=/dev/nvme0n1 00:13:54.596 [job1] 00:13:54.596 filename=/dev/nvme0n2 00:13:54.596 [job2] 00:13:54.596 filename=/dev/nvme0n3 00:13:54.596 [job3] 00:13:54.596 filename=/dev/nvme0n4 00:13:54.853 Could not set queue depth (nvme0n1) 00:13:54.853 Could not set queue depth (nvme0n2) 00:13:54.853 Could not set queue depth (nvme0n3) 00:13:54.853 Could not set queue depth (nvme0n4) 00:13:54.853 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.853 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.853 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.853 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.853 fio-3.35 00:13:54.853 Starting 4 threads 00:13:56.228 00:13:56.228 job0: (groupid=0, jobs=1): err= 0: pid=66634: Wed Nov 20 05:24:10 2024 00:13:56.228 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:56.228 slat (nsec): min=10869, max=69320, avg=15157.82, stdev=4446.62 00:13:56.228 clat (usec): min=135, max=457, avg=187.19, stdev=31.23 00:13:56.228 lat (usec): min=150, max=470, avg=202.35, stdev=31.99 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:13:56.228 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:13:56.228 | 70.00th=[ 194], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 249], 00:13:56.228 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 347], 00:13:56.228 | 99.99th=[ 457] 
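For reference, the target provisioning traced just before this fio run boils down to the rpc.py sequence below; "rpc.py" abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py and all values are copied from the trace. The four namespaces of cnode1 are what appear on the initiator as /dev/nvme0n1 through /dev/nvme0n4, the files the fio jobs address:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do rpc.py bdev_malloc_create 64 512; done        # -> Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 \
      --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93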
00:13:56.228 write: IOPS=2996, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:13:56.228 slat (usec): min=14, max=122, avg=23.43, stdev= 7.56 00:13:56.228 clat (usec): min=94, max=771, avg=134.05, stdev=25.45 00:13:56.228 lat (usec): min=111, max=797, avg=157.49, stdev=26.53 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:13:56.228 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 135], 00:13:56.228 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:13:56.228 | 99.00th=[ 194], 99.50th=[ 212], 99.90th=[ 553], 99.95th=[ 676], 00:13:56.228 | 99.99th=[ 775] 00:13:56.228 bw ( KiB/s): min=12288, max=12288, per=26.42%, avg=12288.00, stdev= 0.00, samples=1 00:13:56.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:56.228 lat (usec) : 100=0.09%, 250=97.59%, 500=2.27%, 750=0.04%, 1000=0.02% 00:13:56.228 cpu : usr=2.50%, sys=8.40%, ctx=5562, majf=0, minf=15 00:13:56.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 issued rwts: total=2560,2999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.228 job1: (groupid=0, jobs=1): err= 0: pid=66635: Wed Nov 20 05:24:10 2024 00:13:56.228 read: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:13:56.228 slat (nsec): min=11021, max=55917, avg=15027.01, stdev=4914.84 00:13:56.228 clat (usec): min=134, max=850, avg=170.54, stdev=24.43 00:13:56.228 lat (usec): min=148, max=863, avg=185.57, stdev=25.54 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:13:56.228 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:13:56.228 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 215], 00:13:56.228 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 281], 00:13:56.228 | 99.99th=[ 848] 00:13:56.228 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:56.228 slat (usec): min=14, max=101, avg=22.58, stdev= 7.03 00:13:56.228 clat (usec): min=92, max=1589, avg=130.51, stdev=32.20 00:13:56.228 lat (usec): min=110, max=1608, avg=153.09, stdev=33.34 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:13:56.228 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 130], 00:13:56.228 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 167], 00:13:56.228 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 227], 99.95th=[ 322], 00:13:56.228 | 99.99th=[ 1598] 00:13:56.228 bw ( KiB/s): min=12288, max=12288, per=26.42%, avg=12288.00, stdev= 0.00, samples=1 00:13:56.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:56.228 lat (usec) : 100=0.31%, 250=99.25%, 500=0.41%, 1000=0.02% 00:13:56.228 lat (msec) : 2=0.02% 00:13:56.228 cpu : usr=2.70%, sys=8.50%, ctx=5873, majf=0, minf=5 00:13:56.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 issued rwts: total=2800,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.228 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:13:56.228 job2: (groupid=0, jobs=1): err= 0: pid=66636: Wed Nov 20 05:24:10 2024 00:13:56.228 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:56.228 slat (usec): min=11, max=252, avg=15.69, stdev= 6.42 00:13:56.228 clat (usec): min=145, max=2238, avg=192.56, stdev=55.90 00:13:56.228 lat (usec): min=158, max=2264, avg=208.25, stdev=56.51 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:13:56.228 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:13:56.228 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 239], 95.00th=[ 255], 00:13:56.228 | 99.00th=[ 289], 99.50th=[ 388], 99.90th=[ 791], 99.95th=[ 832], 00:13:56.228 | 99.99th=[ 2245] 00:13:56.228 write: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:13:56.228 slat (usec): min=14, max=108, avg=23.88, stdev= 7.59 00:13:56.228 clat (usec): min=106, max=825, avg=143.56, stdev=30.10 00:13:56.228 lat (usec): min=125, max=850, avg=167.45, stdev=32.12 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:13:56.228 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:13:56.228 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 172], 95.00th=[ 190], 00:13:56.228 | 99.00th=[ 229], 99.50th=[ 269], 99.90th=[ 519], 99.95th=[ 529], 00:13:56.228 | 99.99th=[ 824] 00:13:56.228 bw ( KiB/s): min=12288, max=12288, per=26.42%, avg=12288.00, stdev= 0.00, samples=1 00:13:56.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:56.228 lat (usec) : 250=96.58%, 500=3.23%, 750=0.11%, 1000=0.06% 00:13:56.228 lat (msec) : 4=0.02% 00:13:56.228 cpu : usr=2.30%, sys=8.20%, ctx=5293, majf=0, minf=7 00:13:56.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 issued rwts: total=2560,2732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.228 job3: (groupid=0, jobs=1): err= 0: pid=66637: Wed Nov 20 05:24:10 2024 00:13:56.228 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:56.228 slat (nsec): min=11358, max=61691, avg=14811.43, stdev=3933.46 00:13:56.228 clat (usec): min=147, max=474, avg=188.31, stdev=22.94 00:13:56.228 lat (usec): min=160, max=487, avg=203.12, stdev=23.44 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:13:56.228 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:13:56.228 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 239], 00:13:56.228 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 326], 00:13:56.228 | 99.99th=[ 474] 00:13:56.228 write: IOPS=2831, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:13:56.228 slat (usec): min=15, max=122, avg=23.95, stdev= 7.78 00:13:56.228 clat (usec): min=110, max=1605, avg=142.14, stdev=32.01 00:13:56.228 lat (usec): min=129, max=1625, avg=166.08, stdev=33.20 00:13:56.228 clat percentiles (usec): 00:13:56.228 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:13:56.228 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:13:56.228 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 172], 00:13:56.228 | 99.00th=[ 200], 99.50th=[ 210], 
99.90th=[ 253], 99.95th=[ 334], 00:13:56.228 | 99.99th=[ 1614] 00:13:56.228 bw ( KiB/s): min=12288, max=12288, per=26.42%, avg=12288.00, stdev= 0.00, samples=1 00:13:56.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:56.228 lat (usec) : 250=98.41%, 500=1.58% 00:13:56.228 lat (msec) : 2=0.02% 00:13:56.228 cpu : usr=2.00%, sys=8.60%, ctx=5396, majf=0, minf=11 00:13:56.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.228 issued rwts: total=2560,2834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.228 00:13:56.228 Run status group 0 (all jobs): 00:13:56.228 READ: bw=40.9MiB/s (42.9MB/s), 9.99MiB/s-10.9MiB/s (10.5MB/s-11.5MB/s), io=40.9MiB (42.9MB), run=1001-1001msec 00:13:56.229 WRITE: bw=45.4MiB/s (47.6MB/s), 10.7MiB/s-12.0MiB/s (11.2MB/s-12.6MB/s), io=45.5MiB (47.7MB), run=1001-1001msec 00:13:56.229 00:13:56.229 Disk stats (read/write): 00:13:56.229 nvme0n1: ios=2232/2560, merge=0/0, ticks=458/359, in_queue=817, util=89.58% 00:13:56.229 nvme0n2: ios=2544/2560, merge=0/0, ticks=491/361, in_queue=852, util=90.06% 00:13:56.229 nvme0n3: ios=2075/2540, merge=0/0, ticks=467/393, in_queue=860, util=89.89% 00:13:56.229 nvme0n4: ios=2138/2560, merge=0/0, ticks=411/380, in_queue=791, util=89.74% 00:13:56.229 05:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:56.229 [global] 00:13:56.229 thread=1 00:13:56.229 invalidate=1 00:13:56.229 rw=randwrite 00:13:56.229 time_based=1 00:13:56.229 runtime=1 00:13:56.229 ioengine=libaio 00:13:56.229 direct=1 00:13:56.229 bs=4096 00:13:56.229 iodepth=1 00:13:56.229 norandommap=0 00:13:56.229 numjobs=1 00:13:56.229 00:13:56.229 verify_dump=1 00:13:56.229 verify_backlog=512 00:13:56.229 verify_state_save=0 00:13:56.229 do_verify=1 00:13:56.229 verify=crc32c-intel 00:13:56.229 [job0] 00:13:56.229 filename=/dev/nvme0n1 00:13:56.229 [job1] 00:13:56.229 filename=/dev/nvme0n2 00:13:56.229 [job2] 00:13:56.229 filename=/dev/nvme0n3 00:13:56.229 [job3] 00:13:56.229 filename=/dev/nvme0n4 00:13:56.229 Could not set queue depth (nvme0n1) 00:13:56.229 Could not set queue depth (nvme0n2) 00:13:56.229 Could not set queue depth (nvme0n3) 00:13:56.229 Could not set queue depth (nvme0n4) 00:13:56.229 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.229 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.229 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.229 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.229 fio-3.35 00:13:56.229 Starting 4 threads 00:13:57.604 00:13:57.604 job0: (groupid=0, jobs=1): err= 0: pid=66690: Wed Nov 20 05:24:11 2024 00:13:57.604 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:57.604 slat (nsec): min=10936, max=48700, avg=17483.85, stdev=6267.14 00:13:57.604 clat (usec): min=130, max=772, avg=183.00, stdev=36.61 00:13:57.604 lat (usec): min=142, max=795, avg=200.48, stdev=40.09 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 141], 5.00th=[ 
147], 10.00th=[ 151], 20.00th=[ 157], 00:13:57.604 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:13:57.604 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:13:57.604 | 99.00th=[ 260], 99.50th=[ 371], 99.90th=[ 537], 99.95th=[ 611], 00:13:57.604 | 99.99th=[ 775] 00:13:57.604 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:13:57.604 slat (usec): min=14, max=114, avg=29.78, stdev= 9.45 00:13:57.604 clat (usec): min=91, max=2828, avg=143.45, stdev=61.81 00:13:57.604 lat (usec): min=117, max=2864, avg=173.23, stdev=64.53 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 120], 00:13:57.604 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 137], 60.00th=[ 147], 00:13:57.604 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 184], 00:13:57.604 | 99.00th=[ 200], 99.50th=[ 219], 99.90th=[ 824], 99.95th=[ 930], 00:13:57.604 | 99.99th=[ 2835] 00:13:57.604 bw ( KiB/s): min=12288, max=12288, per=29.39%, avg=12288.00, stdev= 0.00, samples=1 00:13:57.604 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:57.604 lat (usec) : 100=0.19%, 250=98.85%, 500=0.79%, 750=0.09%, 1000=0.06% 00:13:57.604 lat (msec) : 4=0.02% 00:13:57.604 cpu : usr=2.90%, sys=10.20%, ctx=5293, majf=0, minf=19 00:13:57.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 issued rwts: total=2560,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.604 job1: (groupid=0, jobs=1): err= 0: pid=66691: Wed Nov 20 05:24:11 2024 00:13:57.604 read: IOPS=2147, BW=8591KiB/s (8798kB/s)(8600KiB/1001msec) 00:13:57.604 slat (nsec): min=8178, max=52013, avg=17502.00, stdev=6440.85 00:13:57.604 clat (usec): min=135, max=3298, avg=210.61, stdev=128.62 00:13:57.604 lat (usec): min=148, max=3313, avg=228.12, stdev=128.03 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:13:57.604 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 212], 00:13:57.604 | 70.00th=[ 233], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:13:57.604 | 99.00th=[ 363], 99.50th=[ 416], 99.90th=[ 2573], 99.95th=[ 3294], 00:13:57.604 | 99.99th=[ 3294] 00:13:57.604 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:57.604 slat (usec): min=14, max=108, avg=28.37, stdev= 9.98 00:13:57.604 clat (usec): min=95, max=722, avg=166.25, stdev=53.51 00:13:57.604 lat (usec): min=117, max=742, avg=194.63, stdev=52.36 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 121], 00:13:57.604 | 30.00th=[ 126], 40.00th=[ 131], 50.00th=[ 139], 60.00th=[ 159], 00:13:57.604 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 253], 00:13:57.604 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 363], 99.95th=[ 379], 00:13:57.604 | 99.99th=[ 725] 00:13:57.604 bw ( KiB/s): min=12288, max=12288, per=29.39%, avg=12288.00, stdev= 0.00, samples=1 00:13:57.604 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:57.604 lat (usec) : 100=0.21%, 250=87.60%, 500=12.02%, 750=0.06% 00:13:57.604 lat (msec) : 2=0.02%, 4=0.08% 00:13:57.604 cpu : usr=2.90%, sys=8.60%, ctx=4710, majf=0, minf=11 00:13:57.604 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 issued rwts: total=2150,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.604 job2: (groupid=0, jobs=1): err= 0: pid=66692: Wed Nov 20 05:24:11 2024 00:13:57.604 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:57.604 slat (usec): min=11, max=100, avg=17.87, stdev= 7.16 00:13:57.604 clat (usec): min=124, max=677, avg=182.73, stdev=25.90 00:13:57.604 lat (usec): min=156, max=692, avg=200.60, stdev=27.48 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:13:57.604 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:13:57.604 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 241], 00:13:57.604 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 408], 00:13:57.604 | 99.99th=[ 676] 00:13:57.604 write: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:13:57.604 slat (nsec): min=15232, max=76542, avg=27970.93, stdev=9710.80 00:13:57.604 clat (usec): min=99, max=3053, avg=136.29, stdev=58.90 00:13:57.604 lat (usec): min=118, max=3095, avg=164.26, stdev=60.33 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 125], 00:13:57.604 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:13:57.604 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:13:57.604 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 562], 99.95th=[ 963], 00:13:57.604 | 99.99th=[ 3064] 00:13:57.604 bw ( KiB/s): min=12288, max=12288, per=29.39%, avg=12288.00, stdev= 0.00, samples=1 00:13:57.604 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:57.604 lat (usec) : 100=0.02%, 250=98.40%, 500=1.49%, 750=0.06%, 1000=0.02% 00:13:57.604 lat (msec) : 4=0.02% 00:13:57.604 cpu : usr=2.50%, sys=10.20%, ctx=5459, majf=0, minf=9 00:13:57.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.604 issued rwts: total=2560,2888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.604 job3: (groupid=0, jobs=1): err= 0: pid=66693: Wed Nov 20 05:24:11 2024 00:13:57.604 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:57.604 slat (nsec): min=9802, max=42214, avg=15991.53, stdev=4040.33 00:13:57.604 clat (usec): min=157, max=641, avg=236.33, stdev=35.15 00:13:57.604 lat (usec): min=176, max=657, avg=252.32, stdev=35.58 00:13:57.604 clat percentiles (usec): 00:13:57.604 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 215], 00:13:57.605 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:13:57.605 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:13:57.605 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 424], 99.95th=[ 586], 00:13:57.605 | 99.99th=[ 644] 00:13:57.605 write: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec); 0 zone resets 00:13:57.605 slat (nsec): min=12481, max=80023, avg=23938.36, stdev=7115.50 00:13:57.605 clat (usec): min=112, max=568, avg=184.05, stdev=43.19 
00:13:57.605 lat (usec): min=135, max=590, avg=207.98, stdev=44.15 00:13:57.605 clat percentiles (usec): 00:13:57.605 | 1.00th=[ 118], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 143], 00:13:57.605 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 194], 00:13:57.605 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 253], 00:13:57.605 | 99.00th=[ 289], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 449], 00:13:57.605 | 99.99th=[ 570] 00:13:57.605 bw ( KiB/s): min=10664, max=10664, per=25.51%, avg=10664.00, stdev= 0.00, samples=1 00:13:57.605 iops : min= 2666, max= 2666, avg=2666.00, stdev= 0.00, samples=1 00:13:57.605 lat (usec) : 250=82.98%, 500=16.96%, 750=0.07% 00:13:57.605 cpu : usr=1.90%, sys=7.40%, ctx=4329, majf=0, minf=9 00:13:57.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.605 issued rwts: total=2048,2281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.605 00:13:57.605 Run status group 0 (all jobs): 00:13:57.605 READ: bw=36.4MiB/s (38.1MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=36.4MiB (38.2MB), run=1001-1001msec 00:13:57.605 WRITE: bw=40.8MiB/s (42.8MB/s), 9115KiB/s-11.3MiB/s (9334kB/s-11.8MB/s), io=40.9MiB (42.9MB), run=1001-1001msec 00:13:57.605 00:13:57.605 Disk stats (read/write): 00:13:57.605 nvme0n1: ios=2098/2499, merge=0/0, ticks=402/387, in_queue=789, util=88.68% 00:13:57.605 nvme0n2: ios=2096/2146, merge=0/0, ticks=456/352, in_queue=808, util=88.56% 00:13:57.605 nvme0n3: ios=2161/2560, merge=0/0, ticks=416/378, in_queue=794, util=89.29% 00:13:57.605 nvme0n4: ios=1772/2048, merge=0/0, ticks=417/375, in_queue=792, util=89.75% 00:13:57.605 05:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:57.605 [global] 00:13:57.605 thread=1 00:13:57.605 invalidate=1 00:13:57.605 rw=write 00:13:57.605 time_based=1 00:13:57.605 runtime=1 00:13:57.605 ioengine=libaio 00:13:57.605 direct=1 00:13:57.605 bs=4096 00:13:57.605 iodepth=128 00:13:57.605 norandommap=0 00:13:57.605 numjobs=1 00:13:57.605 00:13:57.605 verify_dump=1 00:13:57.605 verify_backlog=512 00:13:57.605 verify_state_save=0 00:13:57.605 do_verify=1 00:13:57.605 verify=crc32c-intel 00:13:57.605 [job0] 00:13:57.605 filename=/dev/nvme0n1 00:13:57.605 [job1] 00:13:57.605 filename=/dev/nvme0n2 00:13:57.605 [job2] 00:13:57.605 filename=/dev/nvme0n3 00:13:57.605 [job3] 00:13:57.605 filename=/dev/nvme0n4 00:13:57.605 Could not set queue depth (nvme0n1) 00:13:57.605 Could not set queue depth (nvme0n2) 00:13:57.605 Could not set queue depth (nvme0n3) 00:13:57.605 Could not set queue depth (nvme0n4) 00:13:57.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.605 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.605 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.605 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.605 fio-3.35 00:13:57.605 Starting 4 threads 00:13:58.980 00:13:58.980 job0: (groupid=0, jobs=1): err= 0: pid=66748: Wed Nov 20 05:24:13 2024 00:13:58.980 
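Judging from the job files dumped above, each fio-wrapper invocation in this test (for example "-p nvmf -i 4096 -d 1 -t write -r 1 -v") appears to correspond to a plain fio run of roughly the shape below; the flag mapping is inferred from this log, not taken from the wrapper source:

  # -i sets bs, -d sets iodepth, -t sets rw, -r sets runtime, -v enables crc32c-intel verify
  fio --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme0n2 \
      --name=job2 --filename=/dev/nvme0n3 \
      --name=job3 --filename=/dev/nvme0n4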
read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:13:58.980 slat (usec): min=5, max=13024, avg=173.10, stdev=912.68 00:13:58.980 clat (usec): min=12395, max=48411, avg=21122.77, stdev=5083.20 00:13:58.980 lat (usec): min=12413, max=48421, avg=21295.86, stdev=5161.18 00:13:58.980 clat percentiles (usec): 00:13:58.980 | 1.00th=[13829], 5.00th=[15401], 10.00th=[16581], 20.00th=[18482], 00:13:58.980 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:13:58.980 | 70.00th=[21627], 80.00th=[23987], 90.00th=[27132], 95.00th=[28705], 00:13:58.980 | 99.00th=[43779], 99.50th=[45351], 99.90th=[48497], 99.95th=[48497], 00:13:58.980 | 99.99th=[48497] 00:13:58.980 write: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1005msec); 0 zone resets 00:13:58.980 slat (usec): min=11, max=8097, avg=188.79, stdev=767.47 00:13:58.980 clat (usec): min=3919, max=59808, avg=25491.16, stdev=13436.25 00:13:58.980 lat (usec): min=4892, max=59834, avg=25679.94, stdev=13518.56 00:13:58.980 clat percentiles (usec): 00:13:58.980 | 1.00th=[10683], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:13:58.980 | 30.00th=[15270], 40.00th=[15926], 50.00th=[17695], 60.00th=[31327], 00:13:58.980 | 70.00th=[34341], 80.00th=[39060], 90.00th=[45876], 95.00th=[49546], 00:13:58.980 | 99.00th=[55313], 99.50th=[55837], 99.90th=[60031], 99.95th=[60031], 00:13:58.980 | 99.99th=[60031] 00:13:58.980 bw ( KiB/s): min= 8264, max=13560, per=17.87%, avg=10912.00, stdev=3744.84, samples=2 00:13:58.980 iops : min= 2066, max= 3390, avg=2728.00, stdev=936.21, samples=2 00:13:58.980 lat (msec) : 4=0.02%, 10=0.42%, 20=56.95%, 50=40.54%, 100=2.07% 00:13:58.980 cpu : usr=1.79%, sys=6.97%, ctx=278, majf=0, minf=11 00:13:58.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:58.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.980 issued rwts: total=2560,2855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.980 job1: (groupid=0, jobs=1): err= 0: pid=66750: Wed Nov 20 05:24:13 2024 00:13:58.980 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:13:58.980 slat (usec): min=3, max=13224, avg=175.86, stdev=999.38 00:13:58.980 clat (usec): min=11380, max=52343, avg=21810.34, stdev=7650.70 00:13:58.980 lat (usec): min=13735, max=52350, avg=21986.20, stdev=7661.30 00:13:58.980 clat percentiles (usec): 00:13:58.980 | 1.00th=[13566], 5.00th=[14615], 10.00th=[15664], 20.00th=[16909], 00:13:58.980 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[22676], 00:13:58.980 | 70.00th=[24511], 80.00th=[25297], 90.00th=[31065], 95.00th=[39584], 00:13:58.980 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:13:58.980 | 99.99th=[52167] 00:13:58.980 write: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:13:58.980 slat (usec): min=10, max=7414, avg=130.67, stdev=666.70 00:13:58.980 clat (usec): min=951, max=46408, avg=17749.47, stdev=6525.89 00:13:58.980 lat (usec): min=992, max=46423, avg=17880.13, stdev=6517.14 00:13:58.980 clat percentiles (usec): 00:13:58.980 | 1.00th=[ 4490], 5.00th=[12256], 10.00th=[12911], 20.00th=[13304], 00:13:58.980 | 30.00th=[13698], 40.00th=[14222], 50.00th=[16188], 60.00th=[17171], 00:13:58.981 | 70.00th=[19268], 80.00th=[21103], 90.00th=[26608], 95.00th=[31851], 00:13:58.981 | 99.00th=[41157], 99.50th=[46400], 99.90th=[46400], 
99.95th=[46400], 00:13:58.981 | 99.99th=[46400] 00:13:58.981 bw ( KiB/s): min=12263, max=12263, per=20.08%, avg=12263.00, stdev= 0.00, samples=1 00:13:58.981 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:58.981 lat (usec) : 1000=0.02% 00:13:58.981 lat (msec) : 2=0.02%, 4=0.20%, 10=0.80%, 20=64.89%, 50=33.60% 00:13:58.981 lat (msec) : 100=0.48% 00:13:58.981 cpu : usr=2.20%, sys=9.50%, ctx=202, majf=0, minf=13 00:13:58.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:58.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.981 issued rwts: total=3072,3330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.981 job2: (groupid=0, jobs=1): err= 0: pid=66755: Wed Nov 20 05:24:13 2024 00:13:58.981 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:13:58.981 slat (usec): min=6, max=4908, avg=111.91, stdev=552.73 00:13:58.981 clat (usec): min=9845, max=19542, avg=14832.83, stdev=1766.18 00:13:58.981 lat (usec): min=12312, max=19556, avg=14944.74, stdev=1692.88 00:13:58.981 clat percentiles (usec): 00:13:58.981 | 1.00th=[10814], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:13:58.981 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14746], 60.00th=[15270], 00:13:58.981 | 70.00th=[15664], 80.00th=[16450], 90.00th=[17171], 95.00th=[17957], 00:13:58.981 | 99.00th=[19006], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:13:58.981 | 99.99th=[19530] 00:13:58.981 write: IOPS=4543, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1002msec); 0 zone resets 00:13:58.981 slat (usec): min=10, max=4856, avg=111.66, stdev=501.65 00:13:58.981 clat (usec): min=1231, max=21381, avg=14445.84, stdev=2362.79 00:13:58.981 lat (usec): min=1249, max=21399, avg=14557.50, stdev=2321.99 00:13:58.981 clat percentiles (usec): 00:13:58.981 | 1.00th=[ 6980], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:13:58.981 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:13:58.981 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16909], 95.00th=[19268], 00:13:58.981 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21365], 99.95th=[21365], 00:13:58.981 | 99.99th=[21365] 00:13:58.981 bw ( KiB/s): min=17416, max=17992, per=28.99%, avg=17704.00, stdev=407.29, samples=2 00:13:58.981 iops : min= 4354, max= 4498, avg=4426.00, stdev=101.82, samples=2 00:13:58.981 lat (msec) : 2=0.10%, 4=0.08%, 10=0.86%, 20=96.65%, 50=2.31% 00:13:58.981 cpu : usr=2.70%, sys=13.09%, ctx=271, majf=0, minf=11 00:13:58.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:58.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.981 issued rwts: total=4096,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.981 job3: (groupid=0, jobs=1): err= 0: pid=66756: Wed Nov 20 05:24:13 2024 00:13:58.981 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:13:58.981 slat (usec): min=5, max=4744, avg=105.54, stdev=513.21 00:13:58.981 clat (usec): min=310, max=18787, avg=13757.66, stdev=1754.01 00:13:58.981 lat (usec): min=2924, max=19033, avg=13863.20, stdev=1688.76 00:13:58.981 clat percentiles (usec): 00:13:58.981 | 1.00th=[ 6194], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:13:58.981 | 
30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:13:58.981 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15533], 95.00th=[16909], 00:13:58.981 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:13:58.981 | 99.99th=[18744] 00:13:58.981 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:13:58.981 slat (usec): min=8, max=5602, avg=105.53, stdev=473.69 00:13:58.981 clat (usec): min=9661, max=20663, avg=13895.82, stdev=1886.47 00:13:58.981 lat (usec): min=10984, max=20674, avg=14001.35, stdev=1836.94 00:13:58.981 clat percentiles (usec): 00:13:58.981 | 1.00th=[10421], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:13:58.981 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:13:58.981 | 70.00th=[13960], 80.00th=[15139], 90.00th=[16450], 95.00th=[18220], 00:13:58.981 | 99.00th=[20055], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:13:58.981 | 99.99th=[20579] 00:13:58.981 bw ( KiB/s): min=18168, max=18696, per=30.18%, avg=18432.00, stdev=373.35, samples=2 00:13:58.981 iops : min= 4542, max= 4674, avg=4608.00, stdev=93.34, samples=2 00:13:58.981 lat (usec) : 500=0.01% 00:13:58.981 lat (msec) : 4=0.35%, 10=0.86%, 20=97.77%, 50=1.01% 00:13:58.981 cpu : usr=4.39%, sys=12.28%, ctx=288, majf=0, minf=15 00:13:58.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:58.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.981 issued rwts: total=4545,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.981 00:13:58.981 Run status group 0 (all jobs): 00:13:58.981 READ: bw=55.5MiB/s (58.2MB/s), 9.95MiB/s-17.7MiB/s (10.4MB/s-18.6MB/s), io=55.8MiB (58.5MB), run=1001-1005msec 00:13:58.981 WRITE: bw=59.6MiB/s (62.5MB/s), 11.1MiB/s-17.9MiB/s (11.6MB/s-18.8MB/s), io=59.9MiB (62.9MB), run=1001-1005msec 00:13:58.981 00:13:58.981 Disk stats (read/write): 00:13:58.981 nvme0n1: ios=2160/2560, merge=0/0, ticks=22495/28904, in_queue=51399, util=88.28% 00:13:58.981 nvme0n2: ios=2608/2816, merge=0/0, ticks=13999/10874, in_queue=24873, util=88.37% 00:13:58.981 nvme0n3: ios=3584/3712, merge=0/0, ticks=12407/12118, in_queue=24525, util=88.66% 00:13:58.981 nvme0n4: ios=3712/4096, merge=0/0, ticks=11851/12102, in_queue=23953, util=89.68% 00:13:58.981 05:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:58.981 [global] 00:13:58.981 thread=1 00:13:58.981 invalidate=1 00:13:58.981 rw=randwrite 00:13:58.981 time_based=1 00:13:58.981 runtime=1 00:13:58.981 ioengine=libaio 00:13:58.981 direct=1 00:13:58.981 bs=4096 00:13:58.981 iodepth=128 00:13:58.981 norandommap=0 00:13:58.981 numjobs=1 00:13:58.981 00:13:58.981 verify_dump=1 00:13:58.981 verify_backlog=512 00:13:58.981 verify_state_save=0 00:13:58.981 do_verify=1 00:13:58.981 verify=crc32c-intel 00:13:58.981 [job0] 00:13:58.981 filename=/dev/nvme0n1 00:13:58.981 [job1] 00:13:58.981 filename=/dev/nvme0n2 00:13:58.981 [job2] 00:13:58.981 filename=/dev/nvme0n3 00:13:58.981 [job3] 00:13:58.981 filename=/dev/nvme0n4 00:13:58.981 Could not set queue depth (nvme0n1) 00:13:58.981 Could not set queue depth (nvme0n2) 00:13:58.981 Could not set queue depth (nvme0n3) 00:13:58.981 Could not set queue depth (nvme0n4) 00:13:58.981 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.981 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.981 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.981 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.981 fio-3.35 00:13:58.981 Starting 4 threads 00:14:00.355 00:14:00.355 job0: (groupid=0, jobs=1): err= 0: pid=66810: Wed Nov 20 05:24:14 2024 00:14:00.355 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:14:00.355 slat (usec): min=6, max=4361, avg=86.62, stdev=367.67 00:14:00.355 clat (usec): min=6928, max=15788, avg=11500.90, stdev=916.13 00:14:00.355 lat (usec): min=7530, max=15802, avg=11587.53, stdev=911.47 00:14:00.355 clat percentiles (usec): 00:14:00.355 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:14:00.355 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:14:00.355 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[13042], 00:14:00.355 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:14:00.355 | 99.99th=[15795] 00:14:00.355 write: IOPS=5834, BW=22.8MiB/s (23.9MB/s)(22.8MiB/1001msec); 0 zone resets 00:14:00.355 slat (usec): min=10, max=4700, avg=80.16, stdev=432.14 00:14:00.355 clat (usec): min=486, max=15939, avg=10601.07, stdev=1199.39 00:14:00.355 lat (usec): min=3959, max=15986, avg=10681.23, stdev=1257.91 00:14:00.355 clat percentiles (usec): 00:14:00.355 | 1.00th=[ 5735], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:14:00.355 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:14:00.355 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:14:00.355 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15533], 99.95th=[15926], 00:14:00.355 | 99.99th=[15926] 00:14:00.355 bw ( KiB/s): min=24526, max=24526, per=38.28%, avg=24526.00, stdev= 0.00, samples=1 00:14:00.355 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 00:14:00.355 lat (usec) : 500=0.01% 00:14:00.355 lat (msec) : 4=0.02%, 10=11.18%, 20=88.79% 00:14:00.355 cpu : usr=6.20%, sys=13.90%, ctx=411, majf=0, minf=11 00:14:00.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:00.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.356 issued rwts: total=5632,5840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.356 job1: (groupid=0, jobs=1): err= 0: pid=66811: Wed Nov 20 05:24:14 2024 00:14:00.356 read: IOPS=2266, BW=9065KiB/s (9282kB/s)(9092KiB/1003msec) 00:14:00.356 slat (usec): min=5, max=10446, avg=220.91, stdev=1172.34 00:14:00.356 clat (usec): min=272, max=39887, avg=26389.91, stdev=6597.86 00:14:00.356 lat (usec): min=2805, max=39906, avg=26610.82, stdev=6549.59 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[ 3064], 5.00th=[19006], 10.00th=[20841], 20.00th=[22414], 00:14:00.356 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:14:00.356 | 70.00th=[27132], 80.00th=[30802], 90.00th=[36963], 95.00th=[38536], 00:14:00.356 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:14:00.356 | 99.99th=[40109] 00:14:00.356 write: IOPS=2552, BW=9.97MiB/s 
(10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:14:00.356 slat (usec): min=11, max=7594, avg=187.01, stdev=919.88 00:14:00.356 clat (usec): min=15394, max=36216, avg=25880.95, stdev=3266.83 00:14:00.356 lat (usec): min=19726, max=36267, avg=26067.96, stdev=3128.17 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[18744], 5.00th=[20579], 10.00th=[22938], 20.00th=[23200], 00:14:00.356 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[27132], 00:14:00.356 | 70.00th=[27919], 80.00th=[28443], 90.00th=[29754], 95.00th=[31327], 00:14:00.356 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:14:00.356 | 99.99th=[36439] 00:14:00.356 bw ( KiB/s): min= 8456, max=12024, per=15.98%, avg=10240.00, stdev=2522.96, samples=2 00:14:00.356 iops : min= 2114, max= 3006, avg=2560.00, stdev=630.74, samples=2 00:14:00.356 lat (usec) : 500=0.02% 00:14:00.356 lat (msec) : 4=0.66%, 10=0.66%, 20=2.40%, 50=96.25% 00:14:00.356 cpu : usr=2.79%, sys=7.09%, ctx=153, majf=0, minf=7 00:14:00.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:00.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.356 issued rwts: total=2273,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.356 job2: (groupid=0, jobs=1): err= 0: pid=66812: Wed Nov 20 05:24:14 2024 00:14:00.356 read: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec) 00:14:00.356 slat (usec): min=6, max=3124, avg=98.86, stdev=475.20 00:14:00.356 clat (usec): min=339, max=14341, avg=12973.91, stdev=1182.88 00:14:00.356 lat (usec): min=2887, max=14365, avg=13072.78, stdev=1084.37 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[ 6390], 5.00th=[11076], 10.00th=[12649], 20.00th=[12911], 00:14:00.356 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:14:00.356 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13698], 00:14:00.356 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:14:00.356 | 99.99th=[14353] 00:14:00.356 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:00.356 slat (usec): min=8, max=3660, avg=96.98, stdev=425.52 00:14:00.356 clat (usec): min=9197, max=17372, avg=12656.21, stdev=942.03 00:14:00.356 lat (usec): min=11403, max=17421, avg=12753.20, stdev=848.11 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[10028], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:14:00.356 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:14:00.356 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13042], 95.00th=[14222], 00:14:00.356 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:14:00.356 | 99.99th=[17433] 00:14:00.356 bw ( KiB/s): min=20480, max=20480, per=31.97%, avg=20480.00, stdev= 0.00, samples=1 00:14:00.356 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:00.356 lat (usec) : 500=0.01% 00:14:00.356 lat (msec) : 4=0.32%, 10=1.08%, 20=98.59% 00:14:00.356 cpu : usr=4.00%, sys=12.60%, ctx=346, majf=0, minf=13 00:14:00.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:00.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.356 issued rwts: total=4737,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:00.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.356 job3: (groupid=0, jobs=1): err= 0: pid=66813: Wed Nov 20 05:24:14 2024 00:14:00.356 read: IOPS=2232, BW=8928KiB/s (9143kB/s)(8964KiB/1004msec) 00:14:00.356 slat (usec): min=7, max=9966, avg=214.31, stdev=1122.32 00:14:00.356 clat (usec): min=1793, max=40089, avg=26914.18, stdev=5884.20 00:14:00.356 lat (usec): min=7063, max=40106, avg=27128.50, stdev=5815.42 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[ 7439], 5.00th=[19530], 10.00th=[23987], 20.00th=[24249], 00:14:00.356 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:14:00.356 | 70.00th=[26346], 80.00th=[31327], 90.00th=[36439], 95.00th=[38011], 00:14:00.356 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:14:00.356 | 99.99th=[40109] 00:14:00.356 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:14:00.356 slat (usec): min=11, max=9240, avg=195.26, stdev=958.68 00:14:00.356 clat (usec): min=17680, max=33442, avg=25501.82, stdev=2644.35 00:14:00.356 lat (usec): min=22609, max=33457, avg=25697.08, stdev=2476.21 00:14:00.356 clat percentiles (usec): 00:14:00.356 | 1.00th=[18744], 5.00th=[22938], 10.00th=[22938], 20.00th=[23462], 00:14:00.356 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[25822], 00:14:00.356 | 70.00th=[27395], 80.00th=[28181], 90.00th=[28967], 95.00th=[29754], 00:14:00.356 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:14:00.356 | 99.99th=[33424] 00:14:00.356 bw ( KiB/s): min= 8200, max=12280, per=15.98%, avg=10240.00, stdev=2885.00, samples=2 00:14:00.356 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:14:00.356 lat (msec) : 2=0.02%, 10=0.67%, 20=3.19%, 50=96.13% 00:14:00.356 cpu : usr=2.29%, sys=7.58%, ctx=153, majf=0, minf=19 00:14:00.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:00.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.356 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.356 00:14:00.356 Run status group 0 (all jobs): 00:14:00.356 READ: bw=57.9MiB/s (60.7MB/s), 8928KiB/s-22.0MiB/s (9143kB/s-23.0MB/s), io=58.1MiB (61.0MB), run=1001-1004msec 00:14:00.356 WRITE: bw=62.6MiB/s (65.6MB/s), 9.96MiB/s-22.8MiB/s (10.4MB/s-23.9MB/s), io=62.8MiB (65.9MB), run=1001-1004msec 00:14:00.356 00:14:00.356 Disk stats (read/write): 00:14:00.356 nvme0n1: ios=4817/5120, merge=0/0, ticks=26468/22721, in_queue=49189, util=89.98% 00:14:00.356 nvme0n2: ios=2061/2048, merge=0/0, ticks=13988/11696, in_queue=25684, util=89.37% 00:14:00.356 nvme0n3: ios=4117/4384, merge=0/0, ticks=12239/12444, in_queue=24683, util=89.79% 00:14:00.356 nvme0n4: ios=2016/2048, merge=0/0, ticks=13425/12269, in_queue=25694, util=89.63% 00:14:00.356 05:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:00.356 05:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66830 00:14:00.356 05:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:00.356 05:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:00.356 [global] 00:14:00.356 thread=1 00:14:00.356 invalidate=1 
00:14:00.356 rw=read 00:14:00.356 time_based=1 00:14:00.356 runtime=10 00:14:00.356 ioengine=libaio 00:14:00.356 direct=1 00:14:00.356 bs=4096 00:14:00.356 iodepth=1 00:14:00.356 norandommap=1 00:14:00.356 numjobs=1 00:14:00.356 00:14:00.356 [job0] 00:14:00.356 filename=/dev/nvme0n1 00:14:00.356 [job1] 00:14:00.356 filename=/dev/nvme0n2 00:14:00.356 [job2] 00:14:00.356 filename=/dev/nvme0n3 00:14:00.356 [job3] 00:14:00.356 filename=/dev/nvme0n4 00:14:00.356 Could not set queue depth (nvme0n1) 00:14:00.356 Could not set queue depth (nvme0n2) 00:14:00.356 Could not set queue depth (nvme0n3) 00:14:00.356 Could not set queue depth (nvme0n4) 00:14:00.356 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.356 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.356 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.356 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.356 fio-3.35 00:14:00.356 Starting 4 threads 00:14:03.643 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:03.643 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41218048, buflen=4096 00:14:03.643 fio: pid=66874, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:03.643 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:03.901 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53047296, buflen=4096 00:14:03.901 fio: pid=66873, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:03.901 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.901 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:04.159 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47869952, buflen=4096 00:14:04.159 fio: pid=66871, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:04.159 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.159 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:04.418 fio: pid=66872, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:04.418 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7249920, buflen=4096 00:14:04.418 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.418 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:04.418 00:14:04.418 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66871: Wed Nov 20 05:24:18 2024 00:14:04.418 read: IOPS=3239, BW=12.7MiB/s (13.3MB/s)(45.7MiB/3608msec) 00:14:04.418 slat (usec): min=7, 
max=15786, avg=24.44, stdev=201.95 00:14:04.418 clat (usec): min=130, max=6135, avg=282.07, stdev=118.30 00:14:04.418 lat (usec): min=143, max=16023, avg=306.51, stdev=234.14 00:14:04.418 clat percentiles (usec): 00:14:04.418 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 172], 20.00th=[ 239], 00:14:04.418 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:14:04.418 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 379], 95.00th=[ 424], 00:14:04.418 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 1156], 99.95th=[ 1876], 00:14:04.418 | 99.99th=[ 4555] 00:14:04.418 bw ( KiB/s): min= 9432, max=16272, per=23.96%, avg=12878.29, stdev=2132.71, samples=7 00:14:04.418 iops : min= 2358, max= 4068, avg=3219.57, stdev=533.18, samples=7 00:14:04.418 lat (usec) : 250=31.07%, 500=67.26%, 750=1.45%, 1000=0.08% 00:14:04.418 lat (msec) : 2=0.09%, 4=0.03%, 10=0.02% 00:14:04.418 cpu : usr=1.39%, sys=6.29%, ctx=11705, majf=0, minf=1 00:14:04.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 issued rwts: total=11688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.418 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66872: Wed Nov 20 05:24:18 2024 00:14:04.418 read: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(70.9MiB/3933msec) 00:14:04.418 slat (usec): min=10, max=10645, avg=20.82, stdev=150.28 00:14:04.418 clat (usec): min=129, max=3257, avg=194.06, stdev=66.17 00:14:04.418 lat (usec): min=140, max=10909, avg=214.88, stdev=165.88 00:14:04.418 clat percentiles (usec): 00:14:04.418 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:14:04.418 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 184], 00:14:04.418 | 70.00th=[ 196], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 277], 00:14:04.418 | 99.00th=[ 408], 99.50th=[ 537], 99.90th=[ 701], 99.95th=[ 783], 00:14:04.418 | 99.99th=[ 2737] 00:14:04.418 bw ( KiB/s): min=12456, max=21102, per=34.01%, avg=18280.86, stdev=3028.20, samples=7 00:14:04.418 iops : min= 3114, max= 5275, avg=4570.14, stdev=756.97, samples=7 00:14:04.418 lat (usec) : 250=87.91%, 500=11.57%, 750=0.46%, 1000=0.02% 00:14:04.418 lat (msec) : 2=0.02%, 4=0.02% 00:14:04.418 cpu : usr=1.68%, sys=7.40%, ctx=18164, majf=0, minf=1 00:14:04.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 issued rwts: total=18155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.418 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66873: Wed Nov 20 05:24:18 2024 00:14:04.418 read: IOPS=3883, BW=15.2MiB/s (15.9MB/s)(50.6MiB/3335msec) 00:14:04.418 slat (usec): min=9, max=8701, avg=20.48, stdev=97.94 00:14:04.418 clat (usec): min=2, max=3000, avg=235.19, stdev=95.62 00:14:04.418 lat (usec): min=160, max=9180, avg=255.67, stdev=139.29 00:14:04.418 clat percentiles (usec): 00:14:04.418 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:14:04.418 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 221], 00:14:04.418 | 
70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 330], 95.00th=[ 404], 00:14:04.418 | 99.00th=[ 578], 99.50th=[ 709], 99.90th=[ 873], 99.95th=[ 1663], 00:14:04.418 | 99.99th=[ 2573] 00:14:04.418 bw ( KiB/s): min= 9712, max=18536, per=29.17%, avg=15679.33, stdev=3657.42, samples=6 00:14:04.418 iops : min= 2428, max= 4634, avg=3919.83, stdev=914.35, samples=6 00:14:04.418 lat (usec) : 4=0.01%, 250=72.31%, 500=26.31%, 750=1.18%, 1000=0.10% 00:14:04.418 lat (msec) : 2=0.05%, 4=0.03% 00:14:04.418 cpu : usr=1.71%, sys=6.27%, ctx=12957, majf=0, minf=2 00:14:04.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 issued rwts: total=12952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.418 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66874: Wed Nov 20 05:24:18 2024 00:14:04.418 read: IOPS=3289, BW=12.8MiB/s (13.5MB/s)(39.3MiB/3059msec) 00:14:04.418 slat (nsec): min=7983, max=81453, avg=21941.87, stdev=7958.93 00:14:04.418 clat (usec): min=163, max=8158, avg=279.96, stdev=124.24 00:14:04.418 lat (usec): min=189, max=8187, avg=301.90, stdev=125.69 00:14:04.418 clat percentiles (usec): 00:14:04.418 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:14:04.418 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:14:04.418 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 347], 95.00th=[ 379], 00:14:04.418 | 99.00th=[ 453], 99.50th=[ 652], 99.90th=[ 725], 99.95th=[ 742], 00:14:04.418 | 99.99th=[ 7701] 00:14:04.418 bw ( KiB/s): min=12448, max=13688, per=24.44%, avg=13140.67, stdev=476.13, samples=6 00:14:04.418 iops : min= 3112, max= 3422, avg=3285.17, stdev=119.03, samples=6 00:14:04.418 lat (usec) : 250=25.82%, 500=73.33%, 750=0.78% 00:14:04.418 lat (msec) : 2=0.01%, 4=0.02%, 10=0.02% 00:14:04.418 cpu : usr=2.13%, sys=6.25%, ctx=10065, majf=0, minf=2 00:14:04.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.418 issued rwts: total=10064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.418 00:14:04.418 Run status group 0 (all jobs): 00:14:04.418 READ: bw=52.5MiB/s (55.0MB/s), 12.7MiB/s-18.0MiB/s (13.3MB/s-18.9MB/s), io=206MiB (216MB), run=3059-3933msec 00:14:04.418 00:14:04.418 Disk stats (read/write): 00:14:04.418 nvme0n1: ios=11688/0, merge=0/0, ticks=3164/0, in_queue=3164, util=95.06% 00:14:04.418 nvme0n2: ios=17788/0, merge=0/0, ticks=3482/0, in_queue=3482, util=95.66% 00:14:04.418 nvme0n3: ios=12247/0, merge=0/0, ticks=2857/0, in_queue=2857, util=96.70% 00:14:04.418 nvme0n4: ios=9350/0, merge=0/0, ticks=2513/0, in_queue=2513, util=96.51% 00:14:04.676 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.676 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:04.932 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.932 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:05.497 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.497 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:05.755 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.755 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66830 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:06.320 nvmf hotplug test: fio failed as expected 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:06.320 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:06.580 05:24:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.580 rmmod nvme_tcp 00:14:06.580 rmmod nvme_fabrics 00:14:06.580 rmmod nvme_keyring 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66440 ']' 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66440 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 66440 ']' 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 66440 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:06.580 05:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66440 00:14:06.580 killing process with pid 66440 00:14:06.581 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:06.581 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:06.581 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66440' 00:14:06.581 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 66440 00:14:06.581 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 66440 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:06.839 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:14:07.099 00:14:07.099 real 0m20.895s 00:14:07.099 user 1m19.247s 00:14:07.099 sys 0m10.899s 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.099 ************************************ 00:14:07.099 END TEST nvmf_fio_target 00:14:07.099 ************************************ 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:07.099 ************************************ 00:14:07.099 START TEST nvmf_bdevio 00:14:07.099 ************************************ 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:07.099 * Looking for test storage... 
00:14:07.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.099 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:07.358 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.359 --rc genhtml_branch_coverage=1 00:14:07.359 --rc genhtml_function_coverage=1 00:14:07.359 --rc genhtml_legend=1 00:14:07.359 --rc geninfo_all_blocks=1 00:14:07.359 --rc geninfo_unexecuted_blocks=1 00:14:07.359 00:14:07.359 ' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.359 --rc genhtml_branch_coverage=1 00:14:07.359 --rc genhtml_function_coverage=1 00:14:07.359 --rc genhtml_legend=1 00:14:07.359 --rc geninfo_all_blocks=1 00:14:07.359 --rc geninfo_unexecuted_blocks=1 00:14:07.359 00:14:07.359 ' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.359 --rc genhtml_branch_coverage=1 00:14:07.359 --rc genhtml_function_coverage=1 00:14:07.359 --rc genhtml_legend=1 00:14:07.359 --rc geninfo_all_blocks=1 00:14:07.359 --rc geninfo_unexecuted_blocks=1 00:14:07.359 00:14:07.359 ' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.359 --rc genhtml_branch_coverage=1 00:14:07.359 --rc genhtml_function_coverage=1 00:14:07.359 --rc genhtml_legend=1 00:14:07.359 --rc geninfo_all_blocks=1 00:14:07.359 --rc geninfo_unexecuted_blocks=1 00:14:07.359 00:14:07.359 ' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.359 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
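The nvmftestinit call above hands control to nvmf_veth_init, which builds the virtual test network used by the rest of this run. Condensed into a standalone sketch (the authoritative sequence lives in test/nvmf/common.sh; the interface names and addresses below are simply the ones that appear in the commands logged further down), the topology is roughly:

  # target side lives in its own network namespace, initiator side stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  # the four *_br veth peers are then brought up and enslaved to nvmf_br, and TCP port 4420
  # is opened with iptables rules tagged with an SPDK_NVMF comment, so that nvmftestfini can
  # later strip exactly those rules again (iptables-save | grep -v SPDK_NVMF | iptables-restore,
  # as seen in the teardown of the previous test above)

The four pings that follow (10.0.0.3 and 10.0.0.4 from the root namespace, then 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) are only a reachability check across the bridge before the target application is started.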
00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.359 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:07.360 Cannot find device "nvmf_init_br" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:07.360 Cannot find device "nvmf_init_br2" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:07.360 Cannot find device "nvmf_tgt_br" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.360 Cannot find device "nvmf_tgt_br2" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:07.360 Cannot find device "nvmf_init_br" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:07.360 Cannot find device "nvmf_init_br2" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:07.360 Cannot find device "nvmf_tgt_br" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:07.360 Cannot find device "nvmf_tgt_br2" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:07.360 Cannot find device "nvmf_br" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:07.360 Cannot find device "nvmf_init_if" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:07.360 Cannot find device "nvmf_init_if2" 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.360 
05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.360 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.618 05:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:07.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:07.618 00:14:07.618 --- 10.0.0.3 ping statistics --- 00:14:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.618 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:07.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:07.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:14:07.618 00:14:07.618 --- 10.0.0.4 ping statistics --- 00:14:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.618 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:07.618 00:14:07.618 --- 10.0.0.1 ping statistics --- 00:14:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.618 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:07.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:14:07.618 00:14:07.618 --- 10.0.0.2 ping statistics --- 00:14:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.618 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:07.618 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67194 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67194 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 67194 ']' 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.619 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.619 [2024-11-20 05:24:22.116040] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:14:07.619 [2024-11-20 05:24:22.116115] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.877 [2024-11-20 05:24:22.268223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.877 [2024-11-20 05:24:22.307778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.877 [2024-11-20 05:24:22.307839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.877 [2024-11-20 05:24:22.307852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.877 [2024-11-20 05:24:22.307862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.877 [2024-11-20 05:24:22.307871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.877 [2024-11-20 05:24:22.309006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:07.877 [2024-11-20 05:24:22.309134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:07.877 [2024-11-20 05:24:22.309216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.877 [2024-11-20 05:24:22.309225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.877 [2024-11-20 05:24:22.342879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 [2024-11-20 05:24:23.203244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 Malloc0 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 [2024-11-20 05:24:23.258988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:08.812 { 00:14:08.812 "params": { 00:14:08.812 "name": "Nvme$subsystem", 00:14:08.812 "trtype": "$TEST_TRANSPORT", 00:14:08.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.812 "adrfam": "ipv4", 00:14:08.812 "trsvcid": "$NVMF_PORT", 00:14:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.812 "hdgst": ${hdgst:-false}, 00:14:08.812 "ddgst": ${ddgst:-false} 00:14:08.812 }, 00:14:08.812 "method": "bdev_nvme_attach_controller" 00:14:08.812 } 00:14:08.812 EOF 00:14:08.812 )") 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
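The target that bdevio is pointed at was assembled just above with four rpc_cmd calls: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, -a so any host NQN may connect), the malloc bdev added as its namespace, and a listener on 10.0.0.3:4420. Outside the test harness the same target can be built with scripts/rpc.py directly; a sketch using exactly the values logged above (rpc_cmd is essentially a wrapper around these calls):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then emits the matching initiator-side configuration (the bdev_nvme_attach_controller block printed below), which is handed to bdevio over /dev/fd/62, so the unit tests run against Nvme1n1, the bdev backed by that namespace.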
00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:08.812 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:08.812 "params": { 00:14:08.812 "name": "Nvme1", 00:14:08.812 "trtype": "tcp", 00:14:08.812 "traddr": "10.0.0.3", 00:14:08.812 "adrfam": "ipv4", 00:14:08.812 "trsvcid": "4420", 00:14:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.812 "hdgst": false, 00:14:08.812 "ddgst": false 00:14:08.812 }, 00:14:08.812 "method": "bdev_nvme_attach_controller" 00:14:08.812 }' 00:14:08.812 [2024-11-20 05:24:23.322658] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:14:09.070 [2024-11-20 05:24:23.322750] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67236 ] 00:14:09.070 [2024-11-20 05:24:23.479148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.070 [2024-11-20 05:24:23.521851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.070 [2024-11-20 05:24:23.521724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.070 [2024-11-20 05:24:23.521845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.070 [2024-11-20 05:24:23.563364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.329 I/O targets: 00:14:09.329 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:09.329 00:14:09.329 00:14:09.330 CUnit - A unit testing framework for C - Version 2.1-3 00:14:09.330 http://cunit.sourceforge.net/ 00:14:09.330 00:14:09.330 00:14:09.330 Suite: bdevio tests on: Nvme1n1 00:14:09.330 Test: blockdev write read block ...passed 00:14:09.330 Test: blockdev write zeroes read block ...passed 00:14:09.330 Test: blockdev write zeroes read no split ...passed 00:14:09.330 Test: blockdev write zeroes read split ...passed 00:14:09.330 Test: blockdev write zeroes read split partial ...passed 00:14:09.330 Test: blockdev reset ...[2024-11-20 05:24:23.703792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:09.330 [2024-11-20 05:24:23.703928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2328180 (9): Bad file descriptor 00:14:09.330 [2024-11-20 05:24:23.719396] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:09.330 passed 00:14:09.330 Test: blockdev write read 8 blocks ...passed 00:14:09.330 Test: blockdev write read size > 128k ...passed 00:14:09.330 Test: blockdev write read invalid size ...passed 00:14:09.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:09.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:09.330 Test: blockdev write read max offset ...passed 00:14:09.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:09.330 Test: blockdev writev readv 8 blocks ...passed 00:14:09.330 Test: blockdev writev readv 30 x 1block ...passed 00:14:09.330 Test: blockdev writev readv block ...passed 00:14:09.330 Test: blockdev writev readv size > 128k ...passed 00:14:09.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:09.330 Test: blockdev comparev and writev ...[2024-11-20 05:24:23.727826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.727999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.728142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.728269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.728680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.728806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.728929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.729044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.729516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.729659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.729759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.729862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.730445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.730549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.330 [2024-11-20 05:24:23.730721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:09.330 passed 00:14:09.330 Test: blockdev nvme passthru rw ...passed 00:14:09.330 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:24:23.731696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.330 [2024-11-20 05:24:23.731816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.732050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.330 [2024-11-20 05:24:23.732197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.732401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.330 [2024-11-20 05:24:23.732502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:09.330 [2024-11-20 05:24:23.732709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.330 [2024-11-20 05:24:23.732821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:09.330 passed 00:14:09.330 Test: blockdev nvme admin passthru ...passed 00:14:09.330 Test: blockdev copy ...passed 00:14:09.330 00:14:09.330 Run Summary: Type Total Ran Passed Failed Inactive 00:14:09.330 suites 1 1 n/a 0 0 00:14:09.330 tests 23 23 23 0 0 00:14:09.330 asserts 152 152 152 0 n/a 00:14:09.330 00:14:09.330 Elapsed time = 0.145 seconds 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.589 rmmod nvme_tcp 00:14:09.589 rmmod nvme_fabrics 00:14:09.589 rmmod nvme_keyring 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:09.589 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
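[Note: with the bdevio suite reporting 23/23 tests and 152 asserts passed, the trap handler tears the target down. A minimal sketch of that cleanup, mirroring the trace (paths assumed as in this run):]

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem created for the test
    sync
    modprobe -v -r nvme-tcp                                 # nvme_tcp goes first; nvme-fabrics and nvme-keyring follow, as the rmmod lines show

[The killprocess call that follows stops the nvmf_tgt reactor process (pid 67194) before the network namespace is dismantled.]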
00:14:09.590 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67194 ']' 00:14:09.590 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67194 00:14:09.590 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 67194 ']' 00:14:09.590 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 67194 00:14:09.590 05:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67194 00:14:09.590 killing process with pid 67194 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67194' 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 67194 00:14:09.590 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 67194 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:14:09.848 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:14:10.106 00:14:10.106 real 0m2.970s 00:14:10.106 user 0m8.936s 00:14:10.106 sys 0m0.756s 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.106 ************************************ 00:14:10.106 END TEST nvmf_bdevio 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.106 ************************************ 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:10.106 ************************************ 00:14:10.106 END TEST nvmf_target_core 00:14:10.106 ************************************ 00:14:10.106 00:14:10.106 real 2m34.990s 00:14:10.106 user 6m51.675s 00:14:10.106 sys 0m52.419s 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:10.106 05:24:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:10.106 05:24:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:10.106 05:24:24 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.106 05:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.106 ************************************ 00:14:10.106 START TEST nvmf_target_extra 00:14:10.106 ************************************ 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:10.106 * Looking for test storage... 
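[Note: each suite is launched through the run_test helper in autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timings seen above. A simplified sketch of what that wrapper amounts to (the real helper also manages xtrace and return-code bookkeeping):]

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                  # run the suite body itself
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp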
00:14:10.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.106 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:10.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.365 --rc genhtml_branch_coverage=1 00:14:10.365 --rc genhtml_function_coverage=1 00:14:10.365 --rc genhtml_legend=1 00:14:10.365 --rc geninfo_all_blocks=1 00:14:10.365 --rc geninfo_unexecuted_blocks=1 00:14:10.365 00:14:10.365 ' 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:10.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.365 --rc genhtml_branch_coverage=1 00:14:10.365 --rc genhtml_function_coverage=1 00:14:10.365 --rc genhtml_legend=1 00:14:10.365 --rc geninfo_all_blocks=1 00:14:10.365 --rc geninfo_unexecuted_blocks=1 00:14:10.365 00:14:10.365 ' 00:14:10.365 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:10.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.365 --rc genhtml_branch_coverage=1 00:14:10.365 --rc genhtml_function_coverage=1 00:14:10.365 --rc genhtml_legend=1 00:14:10.365 --rc geninfo_all_blocks=1 00:14:10.365 --rc geninfo_unexecuted_blocks=1 00:14:10.366 00:14:10.366 ' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:10.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.366 --rc genhtml_branch_coverage=1 00:14:10.366 --rc genhtml_function_coverage=1 00:14:10.366 --rc genhtml_legend=1 00:14:10.366 --rc geninfo_all_blocks=1 00:14:10.366 --rc geninfo_unexecuted_blocks=1 00:14:10.366 00:14:10.366 ' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.366 05:24:24 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.366 ************************************ 00:14:10.366 START TEST nvmf_auth_target 00:14:10.366 ************************************ 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:10.366 * Looking for test storage... 
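[Note: the "line 33: [: : integer expression expected" messages here and later in the log are benign. They come from numeric tests in test/nvmf/common.sh against SPDK_TEST_* toggles that are unset in this configuration, as in this sketch (the variable name is illustrative):]

    flag=''                        # an unset SPDK_TEST_* toggle
    [ "$flag" -eq 1 ] && echo on   # bash prints "[: : integer expression expected"; the test is simply false and nothing is enabled

[The run continues normally, as the subsequent START TEST nvmf_auth_target banner shows.]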
00:14:10.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:10.366 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:10.626 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.627 --rc genhtml_branch_coverage=1 00:14:10.627 --rc genhtml_function_coverage=1 00:14:10.627 --rc genhtml_legend=1 00:14:10.627 --rc geninfo_all_blocks=1 00:14:10.627 --rc geninfo_unexecuted_blocks=1 00:14:10.627 00:14:10.627 ' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.627 --rc genhtml_branch_coverage=1 00:14:10.627 --rc genhtml_function_coverage=1 00:14:10.627 --rc genhtml_legend=1 00:14:10.627 --rc geninfo_all_blocks=1 00:14:10.627 --rc geninfo_unexecuted_blocks=1 00:14:10.627 00:14:10.627 ' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.627 --rc genhtml_branch_coverage=1 00:14:10.627 --rc genhtml_function_coverage=1 00:14:10.627 --rc genhtml_legend=1 00:14:10.627 --rc geninfo_all_blocks=1 00:14:10.627 --rc geninfo_unexecuted_blocks=1 00:14:10.627 00:14:10.627 ' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.627 --rc genhtml_branch_coverage=1 00:14:10.627 --rc genhtml_function_coverage=1 00:14:10.627 --rc genhtml_legend=1 00:14:10.627 --rc geninfo_all_blocks=1 00:14:10.627 --rc geninfo_unexecuted_blocks=1 00:14:10.627 00:14:10.627 ' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:10.627 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.628 
05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.628 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:10.628 Cannot find device "nvmf_init_br" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:10.628 Cannot find device "nvmf_init_br2" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:10.628 Cannot find device "nvmf_tgt_br" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.628 Cannot find device "nvmf_tgt_br2" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:10.628 Cannot find device "nvmf_init_br" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:10.628 Cannot find device "nvmf_init_br2" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:10.628 Cannot find device "nvmf_tgt_br" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:10.628 Cannot find device "nvmf_tgt_br2" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:10.628 Cannot find device "nvmf_br" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:10.628 Cannot find device "nvmf_init_if" 00:14:10.628 05:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:10.628 Cannot find device "nvmf_init_if2" 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.628 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.887 05:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:10.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:14:10.887 00:14:10.887 --- 10.0.0.3 ping statistics --- 00:14:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.887 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:10.887 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:10.887 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:10.887 00:14:10.887 --- 10.0.0.4 ping statistics --- 00:14:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.887 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:10.887 00:14:10.887 --- 10.0.0.1 ping statistics --- 00:14:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.887 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:10.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:10.887 00:14:10.887 --- 10.0.0.2 ping statistics --- 00:14:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.887 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.887 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67525 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67525 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67525 ']' 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
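[Note: the nvmf_veth_init block above stitches initiator and target together over veth pairs and a bridge, then checks reachability with the four pings. Condensed to a single initiator/target pair (the full setup creates two pairs per side plus ACCEPT rules for both init interfaces), the topology is roughly:]

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                               # host to target namespace, matching the check above

[nvmf_tgt is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), and pid 67525 is what waitforlisten blocks on before the auth key generation begins.]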
00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.146 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67549 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef4ba22588044a29a2b494ccd448825cbde28d6f8a5a97f2 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R25 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ef4ba22588044a29a2b494ccd448825cbde28d6f8a5a97f2 0 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef4ba22588044a29a2b494ccd448825cbde28d6f8a5a97f2 0 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef4ba22588044a29a2b494ccd448825cbde28d6f8a5a97f2 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.405 05:24:25 
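Two SPDK applications are involved from here on: nvmf_tgt runs inside the namespace as the authenticating target (with -L nvmf_auth debug logging), while a second spdk_tgt runs on the host side on its own RPC socket (/var/tmp/host.sock, -L nvme_auth) and acts as the NVMe/TCP initiator stack. A small sketch of that launch-and-wait pattern follows; the binaries, flags and socket paths are the ones in the trace, but the polling loop is an illustration, not the waitforlisten() helper itself.

# Sketch: start both apps and wait until each RPC socket answers.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!
for sock in /var/tmp/spdk.sock /var/tmp/host.sock; do
    until "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
done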
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R25 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R25 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.R25 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3151bea3a9d78cf841ed1e09908ebe84aceda429c9534b1cddd170d806066000 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ch5 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3151bea3a9d78cf841ed1e09908ebe84aceda429c9534b1cddd170d806066000 3 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3151bea3a9d78cf841ed1e09908ebe84aceda429c9534b1cddd170d806066000 3 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3151bea3a9d78cf841ed1e09908ebe84aceda429c9534b1cddd170d806066000 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ch5 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ch5 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ch5 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:11.405 05:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a5b1c667c1d923e4c60115e499f56cec 00:14:11.405 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Jb7 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a5b1c667c1d923e4c60115e499f56cec 1 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a5b1c667c1d923e4c60115e499f56cec 1 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a5b1c667c1d923e4c60115e499f56cec 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Jb7 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Jb7 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Jb7 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=41c893131132b4733b49c639ad474789e27cc3ebed93c72e 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RjT 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 41c893131132b4733b49c639ad474789e27cc3ebed93c72e 2 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 41c893131132b4733b49c639ad474789e27cc3ebed93c72e 2 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=41c893131132b4733b49c639ad474789e27cc3ebed93c72e 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:11.665 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RjT 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RjT 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RjT 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fdc66ef9e5d42216feb132f638602efb703b4690cc450f0b 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kNo 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fdc66ef9e5d42216feb132f638602efb703b4690cc450f0b 2 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fdc66ef9e5d42216feb132f638602efb703b4690cc450f0b 2 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fdc66ef9e5d42216feb132f638602efb703b4690cc450f0b 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kNo 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kNo 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.kNo 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.665 05:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5157a34774f1594e190fce7b15375144 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I39 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5157a34774f1594e190fce7b15375144 1 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5157a34774f1594e190fce7b15375144 1 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5157a34774f1594e190fce7b15375144 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I39 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I39 00:14:11.665 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.I39 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=408570a93224d90f9fc00f45a0a81fd480c610d3a0bf99b48b8d52ec732c4b86 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8fd 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
408570a93224d90f9fc00f45a0a81fd480c610d3a0bf99b48b8d52ec732c4b86 3 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 408570a93224d90f9fc00f45a0a81fd480c610d3a0bf99b48b8d52ec732c4b86 3 00:14:11.666 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=408570a93224d90f9fc00f45a0a81fd480c610d3a0bf99b48b8d52ec732c4b86 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8fd 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8fd 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8fd 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67525 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67525 ']' 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.923 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67549 /var/tmp/host.sock 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67549 ']' 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:12.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
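The key-generation traces above all follow the same recipe: draw len/2 random bytes, keep them as a hex string, and wrap that string in the DHHC-1 representation whose second field encodes the hash (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and whose third field is base64 of the secret plus a CRC-32 trailer. Below is a hedged sketch of that recipe; the CRC being appended as four little-endian bytes is an assumption based on the standard DHHC-1 secret format, and the helper name gen_key and the file naming are illustrative, not the common.sh functions.

# Sketch: generate a hex secret of the requested length and emit a DHHC-1 string into a 0600 file.
gen_key() {
    local hash_id=$1 len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-XXXXXX)
    python3 - "$hash_id" "$hex" > "$file" <<'PYEOF'
import base64, struct, sys, zlib
hash_id, secret = int(sys.argv[1]), sys.argv[2].encode()
blob = secret + struct.pack("<I", zlib.crc32(secret))   # CRC-32 trailer (assumed little-endian)
print("DHHC-1:%02d:%s:" % (hash_id, base64.b64encode(blob).decode()))
PYEOF
    chmod 0600 "$file"
    echo "$file"
}
key0=$(gen_key 0 48)    # like keys[0] above: 48-char secret, no hash
ckey0=$(gen_key 3 64)   # like ckeys[0] above: 64-char secret, SHA-512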
00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:12.181 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R25 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.R25 00:14:12.439 05:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.R25 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ch5 ]] 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ch5 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ch5 00:14:13.007 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ch5 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Jb7 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Jb7 00:14:13.270 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Jb7 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.RjT ]] 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RjT 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RjT 00:14:13.529 05:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RjT 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kNo 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kNo 00:14:13.788 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kNo 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.I39 ]] 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I39 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I39 00:14:14.047 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I39 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8fd 00:14:14.306 05:24:28 
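Each generated file is registered twice with keyring_file_add_key: once on the target's default RPC socket (so the target can verify hosts) and once on the host app's /var/tmp/host.sock (so the initiator stack can answer the challenge). A compact sketch of that registration loop, using the rpc.py path and RPC name from the trace; the example file path is hypothetical since the real files come from mktemp.

# Sketch: register key files with both keyrings; ckey files are added the same way when present.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2 3; do
    key_file=/tmp/spdk.key-example-$i   # hypothetical path; the log uses mktemp-generated names
    "$RPC" keyring_file_add_key "key$i" "$key_file"                        # target (spdk.sock)
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "$key_file"  # host app
done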
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8fd 00:14:14.306 05:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8fd 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.872 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.130 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.388 00:14:15.388 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.388 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.388 05:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.956 { 00:14:15.956 "cntlid": 1, 00:14:15.956 "qid": 0, 00:14:15.956 "state": "enabled", 00:14:15.956 "thread": "nvmf_tgt_poll_group_000", 00:14:15.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:15.956 "listen_address": { 00:14:15.956 "trtype": "TCP", 00:14:15.956 "adrfam": "IPv4", 00:14:15.956 "traddr": "10.0.0.3", 00:14:15.956 "trsvcid": "4420" 00:14:15.956 }, 00:14:15.956 "peer_address": { 00:14:15.956 "trtype": "TCP", 00:14:15.956 "adrfam": "IPv4", 00:14:15.956 "traddr": "10.0.0.1", 00:14:15.956 "trsvcid": "39882" 00:14:15.956 }, 00:14:15.956 "auth": { 00:14:15.956 "state": "completed", 00:14:15.956 "digest": "sha256", 00:14:15.956 "dhgroup": "null" 00:14:15.956 } 00:14:15.956 } 00:14:15.956 ]' 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.956 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.215 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:16.215 05:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
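The block above is one full connect_authenticate iteration: the host stack is restricted to a single digest/DH group, the host NQN is authorized on the subsystem with a DH-HMAC-CHAP key pair, a controller is attached through the host app, and the qpair's auth block is read back to confirm the handshake completed. A sketch of the same RPC sequence, using the NQNs, UUID and address from the log; the jq -e check is an illustrative way to assert on the JSON rather than the test's exact jq invocations.

# Sketch: one digest/dhgroup/key combination, end to end over RPC.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -e '.[0].auth.state == "completed"'
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0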
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.498 05:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.498 05:24:35 
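After the SPDK-host leg, the same key pair is exercised through the kernel initiator: nvme-cli connects with the formatted DHHC-1 strings, the session is torn down, and the host is de-authorized before the next combination. A sketch of that leg follows; reading the secrets back out of the key files with $(< ...) is an illustrative shortcut, whereas the trace pastes the literal DHHC-1 strings.

# Sketch: kernel-initiator connect/disconnect with DH-HMAC-CHAP secrets.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 \
    --dhchap-secret "$(< /tmp/spdk.key-null.R25)" \
    --dhchap-ctrl-secret "$(< /tmp/spdk.key-sha512.ch5)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"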
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.757 00:14:21.757 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.757 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.757 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.016 { 00:14:22.016 "cntlid": 3, 00:14:22.016 "qid": 0, 00:14:22.016 "state": "enabled", 00:14:22.016 "thread": "nvmf_tgt_poll_group_000", 00:14:22.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:22.016 "listen_address": { 00:14:22.016 "trtype": "TCP", 00:14:22.016 "adrfam": "IPv4", 00:14:22.016 "traddr": "10.0.0.3", 00:14:22.016 "trsvcid": "4420" 00:14:22.016 }, 00:14:22.016 "peer_address": { 00:14:22.016 "trtype": "TCP", 00:14:22.016 "adrfam": "IPv4", 00:14:22.016 "traddr": "10.0.0.1", 00:14:22.016 "trsvcid": "37738" 00:14:22.016 }, 00:14:22.016 "auth": { 00:14:22.016 "state": "completed", 00:14:22.016 "digest": "sha256", 00:14:22.016 "dhgroup": "null" 00:14:22.016 } 00:14:22.016 } 00:14:22.016 ]' 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.016 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.275 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:22.275 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.275 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.275 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.275 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.533 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret 
DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:22.533 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:23.468 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.469 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.727 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.728 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.987 00:14:23.987 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.987 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.987 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.246 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.246 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.246 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.246 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.504 { 00:14:24.504 "cntlid": 5, 00:14:24.504 "qid": 0, 00:14:24.504 "state": "enabled", 00:14:24.504 "thread": "nvmf_tgt_poll_group_000", 00:14:24.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:24.504 "listen_address": { 00:14:24.504 "trtype": "TCP", 00:14:24.504 "adrfam": "IPv4", 00:14:24.504 "traddr": "10.0.0.3", 00:14:24.504 "trsvcid": "4420" 00:14:24.504 }, 00:14:24.504 "peer_address": { 00:14:24.504 "trtype": "TCP", 00:14:24.504 "adrfam": "IPv4", 00:14:24.504 "traddr": "10.0.0.1", 00:14:24.504 "trsvcid": "37782" 00:14:24.504 }, 00:14:24.504 "auth": { 00:14:24.504 "state": "completed", 00:14:24.504 "digest": "sha256", 00:14:24.504 "dhgroup": "null" 00:14:24.504 } 00:14:24.504 } 00:14:24.504 ]' 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.504 05:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.763 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:24.763 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.699 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.700 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.700 05:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.958 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.216 00:14:26.216 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.216 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.217 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.475 { 00:14:26.475 "cntlid": 7, 00:14:26.475 "qid": 0, 00:14:26.475 "state": "enabled", 00:14:26.475 "thread": "nvmf_tgt_poll_group_000", 00:14:26.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:26.475 "listen_address": { 00:14:26.475 "trtype": "TCP", 00:14:26.475 "adrfam": "IPv4", 00:14:26.475 "traddr": "10.0.0.3", 00:14:26.475 "trsvcid": "4420" 00:14:26.475 }, 00:14:26.475 "peer_address": { 00:14:26.475 "trtype": "TCP", 00:14:26.475 "adrfam": "IPv4", 00:14:26.475 "traddr": "10.0.0.1", 00:14:26.475 "trsvcid": "56500" 00:14:26.475 }, 00:14:26.475 "auth": { 00:14:26.475 "state": "completed", 00:14:26.475 "digest": "sha256", 00:14:26.475 "dhgroup": "null" 00:14:26.475 } 00:14:26.475 } 00:14:26.475 ]' 00:14:26.475 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.734 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.734 05:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.734 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:26.734 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.734 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.734 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.734 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.993 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:26.993 05:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.928 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.186 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.186 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.186 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.186 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.445 00:14:28.445 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.445 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.445 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.703 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.703 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.703 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.703 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.962 { 00:14:28.962 "cntlid": 9, 00:14:28.962 "qid": 0, 00:14:28.962 "state": "enabled", 00:14:28.962 "thread": "nvmf_tgt_poll_group_000", 00:14:28.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:28.962 "listen_address": { 00:14:28.962 "trtype": "TCP", 00:14:28.962 "adrfam": "IPv4", 00:14:28.962 "traddr": "10.0.0.3", 00:14:28.962 "trsvcid": "4420" 00:14:28.962 }, 00:14:28.962 "peer_address": { 00:14:28.962 "trtype": "TCP", 00:14:28.962 "adrfam": "IPv4", 00:14:28.962 "traddr": "10.0.0.1", 00:14:28.962 "trsvcid": "56528" 00:14:28.962 }, 00:14:28.962 "auth": { 00:14:28.962 "state": "completed", 00:14:28.962 "digest": "sha256", 00:14:28.962 "dhgroup": "ffdhe2048" 00:14:28.962 } 00:14:28.962 } 00:14:28.962 ]' 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.962 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.220 
05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:29.220 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.155 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.413 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:30.413 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.413 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.413 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:30.413 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.414 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.672 00:14:30.672 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.672 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.672 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.239 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.239 { 00:14:31.239 "cntlid": 11, 00:14:31.239 "qid": 0, 00:14:31.239 "state": "enabled", 00:14:31.239 "thread": "nvmf_tgt_poll_group_000", 00:14:31.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:31.239 "listen_address": { 00:14:31.239 "trtype": "TCP", 00:14:31.239 "adrfam": "IPv4", 00:14:31.239 "traddr": "10.0.0.3", 00:14:31.239 "trsvcid": "4420" 00:14:31.239 }, 00:14:31.239 "peer_address": { 00:14:31.239 "trtype": "TCP", 00:14:31.239 "adrfam": "IPv4", 00:14:31.239 "traddr": "10.0.0.1", 00:14:31.240 "trsvcid": "56558" 00:14:31.240 }, 00:14:31.240 "auth": { 00:14:31.240 "state": "completed", 00:14:31.240 "digest": "sha256", 00:14:31.240 "dhgroup": "ffdhe2048" 00:14:31.240 } 00:14:31.240 } 00:14:31.240 ]' 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.240 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.240 
05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.498 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:31.498 05:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:32.433 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.692 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.949 00:14:32.949 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.949 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.949 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.515 { 00:14:33.515 "cntlid": 13, 00:14:33.515 "qid": 0, 00:14:33.515 "state": "enabled", 00:14:33.515 "thread": "nvmf_tgt_poll_group_000", 00:14:33.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:33.515 "listen_address": { 00:14:33.515 "trtype": "TCP", 00:14:33.515 "adrfam": "IPv4", 00:14:33.515 "traddr": "10.0.0.3", 00:14:33.515 "trsvcid": "4420" 00:14:33.515 }, 00:14:33.515 "peer_address": { 00:14:33.515 "trtype": "TCP", 00:14:33.515 "adrfam": "IPv4", 00:14:33.515 "traddr": "10.0.0.1", 00:14:33.515 "trsvcid": "56580" 00:14:33.515 }, 00:14:33.515 "auth": { 00:14:33.515 "state": "completed", 00:14:33.515 "digest": "sha256", 00:14:33.515 "dhgroup": "ffdhe2048" 00:14:33.515 } 00:14:33.515 } 00:14:33.515 ]' 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.515 05:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.515 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.082 05:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:34.082 05:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:34.673 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.673 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:34.673 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.674 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.674 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.674 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.674 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:34.674 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
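The records above and below trace one pass of target/auth.sh: for each DH-CHAP digest and DH group (sha256 with ffdhe2048, ffdhe3072 and ffdhe4096 in this stretch of the log) and for each key id 0 through 3, the script restricts the host-side options to that digest/group, allows the host NQN on the subsystem with that key (and its controller key when one is defined), attaches a controller over TCP, confirms the qpair reports auth state "completed", then detaches the controller and removes the host. A condensed, hand-written sketch of a single iteration follows, built only from commands that appear verbatim in this trace; the key0/ckey0 keyring names are assumed to have been registered earlier in the script (not shown in this excerpt), and the target-side calls are shown against rpc.py's default socket, which is an assumption about how the rpc_cmd wrapper resolves in this environment.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock                                   # host-side SPDK app, reached via -s (the hostrpc path above)
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93

    # host side: restrict DH-CHAP negotiation to one digest/dhgroup pair for this pass
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side: allow the host with key0, plus a controller key for bidirectional auth
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: authenticate a controller over TCP and verify it attached
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers                   # expect "nvme0"

    # target side: the qpair's auth block should report digest, dhgroup and state "completed"
    $RPC nvmf_subsystem_get_qpairs $SUBNQN

    # tear down before the next digest/dhgroup/key combination
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The same pass also exercises the kernel initiator with nvme connect and matching --dhchap-secret/--dhchap-ctrl-secret values followed by nvme disconnect, as the surrounding records show.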
00:14:34.931 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.931 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:34.931 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.931 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.188 00:14:35.188 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.188 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.188 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.476 { 00:14:35.476 "cntlid": 15, 00:14:35.476 "qid": 0, 00:14:35.476 "state": "enabled", 00:14:35.476 "thread": "nvmf_tgt_poll_group_000", 00:14:35.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:35.476 "listen_address": { 00:14:35.476 "trtype": "TCP", 00:14:35.476 "adrfam": "IPv4", 00:14:35.476 "traddr": "10.0.0.3", 00:14:35.476 "trsvcid": "4420" 00:14:35.476 }, 00:14:35.476 "peer_address": { 00:14:35.476 "trtype": "TCP", 00:14:35.476 "adrfam": "IPv4", 00:14:35.476 "traddr": "10.0.0.1", 00:14:35.476 "trsvcid": "56594" 00:14:35.476 }, 00:14:35.476 "auth": { 00:14:35.476 "state": "completed", 00:14:35.476 "digest": "sha256", 00:14:35.476 "dhgroup": "ffdhe2048" 00:14:35.476 } 00:14:35.476 } 00:14:35.476 ]' 00:14:35.476 05:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.735 
05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.735 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.301 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:36.301 05:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.867 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.126 05:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.692 00:14:37.692 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.692 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.692 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.951 { 00:14:37.951 "cntlid": 17, 00:14:37.951 "qid": 0, 00:14:37.951 "state": "enabled", 00:14:37.951 "thread": "nvmf_tgt_poll_group_000", 00:14:37.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:37.951 "listen_address": { 00:14:37.951 "trtype": "TCP", 00:14:37.951 "adrfam": "IPv4", 00:14:37.951 "traddr": "10.0.0.3", 00:14:37.951 "trsvcid": "4420" 00:14:37.951 }, 00:14:37.951 "peer_address": { 00:14:37.951 "trtype": "TCP", 00:14:37.951 "adrfam": "IPv4", 00:14:37.951 "traddr": "10.0.0.1", 00:14:37.951 "trsvcid": "40326" 00:14:37.951 }, 00:14:37.951 "auth": { 00:14:37.951 "state": "completed", 00:14:37.951 "digest": "sha256", 00:14:37.951 "dhgroup": "ffdhe3072" 00:14:37.951 } 00:14:37.951 } 00:14:37.951 ]' 00:14:37.951 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.210 05:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.210 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.468 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:38.468 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:39.404 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.720 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.979 00:14:39.979 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.979 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.979 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.238 { 00:14:40.238 "cntlid": 19, 00:14:40.238 "qid": 0, 00:14:40.238 "state": "enabled", 00:14:40.238 "thread": "nvmf_tgt_poll_group_000", 00:14:40.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:40.238 "listen_address": { 00:14:40.238 "trtype": "TCP", 00:14:40.238 "adrfam": "IPv4", 00:14:40.238 "traddr": "10.0.0.3", 00:14:40.238 "trsvcid": "4420" 00:14:40.238 }, 00:14:40.238 "peer_address": { 00:14:40.238 "trtype": "TCP", 00:14:40.238 "adrfam": "IPv4", 00:14:40.238 "traddr": "10.0.0.1", 00:14:40.238 "trsvcid": "40344" 00:14:40.238 }, 00:14:40.238 "auth": { 00:14:40.238 "state": "completed", 00:14:40.238 "digest": "sha256", 00:14:40.238 "dhgroup": "ffdhe3072" 00:14:40.238 } 00:14:40.238 } 00:14:40.238 ]' 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.238 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.496 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:40.496 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.496 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.496 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.496 05:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.755 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:40.756 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.323 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.148 00:14:42.148 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.148 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.148 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.406 { 00:14:42.406 "cntlid": 21, 00:14:42.406 "qid": 0, 00:14:42.406 "state": "enabled", 00:14:42.406 "thread": "nvmf_tgt_poll_group_000", 00:14:42.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:42.406 "listen_address": { 00:14:42.406 "trtype": "TCP", 00:14:42.406 "adrfam": "IPv4", 00:14:42.406 "traddr": "10.0.0.3", 00:14:42.406 "trsvcid": "4420" 00:14:42.406 }, 00:14:42.406 "peer_address": { 00:14:42.406 "trtype": "TCP", 00:14:42.406 "adrfam": "IPv4", 00:14:42.406 "traddr": "10.0.0.1", 00:14:42.406 "trsvcid": "40370" 00:14:42.406 }, 00:14:42.406 "auth": { 00:14:42.406 "state": "completed", 00:14:42.406 "digest": "sha256", 00:14:42.406 "dhgroup": "ffdhe3072" 00:14:42.406 } 00:14:42.406 } 00:14:42.406 ]' 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.406 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.406 05:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.665 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.665 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.665 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.665 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.665 05:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.923 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:42.923 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.488 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.053 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.054 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.311 00:14:44.311 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.311 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.311 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.570 { 00:14:44.570 "cntlid": 23, 00:14:44.570 "qid": 0, 00:14:44.570 "state": "enabled", 00:14:44.570 "thread": "nvmf_tgt_poll_group_000", 00:14:44.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:44.570 "listen_address": { 00:14:44.570 "trtype": "TCP", 00:14:44.570 "adrfam": "IPv4", 00:14:44.570 "traddr": "10.0.0.3", 00:14:44.570 "trsvcid": "4420" 00:14:44.570 }, 00:14:44.570 "peer_address": { 00:14:44.570 "trtype": "TCP", 00:14:44.570 "adrfam": "IPv4", 00:14:44.570 "traddr": "10.0.0.1", 00:14:44.570 "trsvcid": "40392" 00:14:44.570 }, 00:14:44.570 "auth": { 00:14:44.570 "state": "completed", 00:14:44.570 "digest": "sha256", 00:14:44.570 "dhgroup": "ffdhe3072" 00:14:44.570 } 00:14:44.570 } 00:14:44.570 ]' 00:14:44.570 05:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.570 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:44.570 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.848 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.848 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.848 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.848 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.848 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.106 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:45.106 05:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.674 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.260 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.518 00:14:46.518 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.518 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.518 05:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.775 { 00:14:46.775 "cntlid": 25, 00:14:46.775 "qid": 0, 00:14:46.775 "state": "enabled", 00:14:46.775 "thread": "nvmf_tgt_poll_group_000", 00:14:46.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:46.775 "listen_address": { 00:14:46.775 "trtype": "TCP", 00:14:46.775 "adrfam": "IPv4", 00:14:46.775 "traddr": "10.0.0.3", 00:14:46.775 "trsvcid": "4420" 00:14:46.775 }, 00:14:46.775 "peer_address": { 00:14:46.775 "trtype": "TCP", 00:14:46.775 "adrfam": "IPv4", 00:14:46.775 "traddr": "10.0.0.1", 00:14:46.775 "trsvcid": "49600" 00:14:46.775 }, 00:14:46.775 "auth": { 00:14:46.775 "state": "completed", 00:14:46.775 "digest": "sha256", 00:14:46.775 "dhgroup": "ffdhe4096" 00:14:46.775 } 00:14:46.775 } 00:14:46.775 ]' 00:14:46.775 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.034 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.292 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:47.292 05:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:47.860 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.860 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:47.860 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.860 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.118 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.118 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.118 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.118 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.377 05:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.636 00:14:48.896 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.896 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.896 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.154 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.154 { 00:14:49.154 "cntlid": 27, 00:14:49.154 "qid": 0, 00:14:49.154 "state": "enabled", 00:14:49.154 "thread": "nvmf_tgt_poll_group_000", 00:14:49.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:49.154 "listen_address": { 00:14:49.154 "trtype": "TCP", 00:14:49.154 "adrfam": "IPv4", 00:14:49.154 "traddr": "10.0.0.3", 00:14:49.154 "trsvcid": "4420" 00:14:49.154 }, 00:14:49.154 "peer_address": { 00:14:49.154 "trtype": "TCP", 00:14:49.154 "adrfam": "IPv4", 00:14:49.154 "traddr": "10.0.0.1", 00:14:49.154 "trsvcid": "49618" 00:14:49.154 }, 00:14:49.154 "auth": { 00:14:49.154 "state": "completed", 
00:14:49.154 "digest": "sha256", 00:14:49.154 "dhgroup": "ffdhe4096" 00:14:49.154 } 00:14:49.154 } 00:14:49.155 ]' 00:14:49.155 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.155 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.155 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.414 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:49.414 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.414 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.414 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.414 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.672 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:49.672 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:50.239 05:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.806 05:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.806 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.064 00:14:51.064 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.064 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.064 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.324 { 00:14:51.324 "cntlid": 29, 00:14:51.324 "qid": 0, 00:14:51.324 "state": "enabled", 00:14:51.324 "thread": "nvmf_tgt_poll_group_000", 00:14:51.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:51.324 "listen_address": { 00:14:51.324 "trtype": "TCP", 00:14:51.324 "adrfam": "IPv4", 00:14:51.324 "traddr": "10.0.0.3", 00:14:51.324 "trsvcid": "4420" 00:14:51.324 }, 00:14:51.324 "peer_address": { 00:14:51.324 "trtype": "TCP", 00:14:51.324 "adrfam": 
"IPv4", 00:14:51.324 "traddr": "10.0.0.1", 00:14:51.324 "trsvcid": "49648" 00:14:51.324 }, 00:14:51.324 "auth": { 00:14:51.324 "state": "completed", 00:14:51.324 "digest": "sha256", 00:14:51.324 "dhgroup": "ffdhe4096" 00:14:51.324 } 00:14:51.324 } 00:14:51.324 ]' 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.324 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.582 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:51.582 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.582 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.582 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.582 05:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.841 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:51.841 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:14:52.778 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.778 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:52.778 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.778 05:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.778 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.778 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.778 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:52.778 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:53.037 05:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.037 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.296 00:14:53.296 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.296 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.296 05:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.864 { 00:14:53.864 "cntlid": 31, 00:14:53.864 "qid": 0, 00:14:53.864 "state": "enabled", 00:14:53.864 "thread": "nvmf_tgt_poll_group_000", 00:14:53.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:53.864 "listen_address": { 00:14:53.864 "trtype": "TCP", 00:14:53.864 "adrfam": "IPv4", 00:14:53.864 "traddr": "10.0.0.3", 00:14:53.864 "trsvcid": "4420" 00:14:53.864 }, 00:14:53.864 "peer_address": { 00:14:53.864 "trtype": "TCP", 
00:14:53.864 "adrfam": "IPv4", 00:14:53.864 "traddr": "10.0.0.1", 00:14:53.864 "trsvcid": "49664" 00:14:53.864 }, 00:14:53.864 "auth": { 00:14:53.864 "state": "completed", 00:14:53.864 "digest": "sha256", 00:14:53.864 "dhgroup": "ffdhe4096" 00:14:53.864 } 00:14:53.864 } 00:14:53.864 ]' 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.864 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.432 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:54.432 05:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.999 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:55.265 
05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.265 05:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.831 00:14:55.831 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.831 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.831 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.090 { 00:14:56.090 "cntlid": 33, 00:14:56.090 "qid": 0, 00:14:56.090 "state": "enabled", 00:14:56.090 "thread": "nvmf_tgt_poll_group_000", 00:14:56.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:56.090 "listen_address": { 00:14:56.090 "trtype": "TCP", 00:14:56.090 "adrfam": "IPv4", 00:14:56.090 "traddr": 
"10.0.0.3", 00:14:56.090 "trsvcid": "4420" 00:14:56.090 }, 00:14:56.090 "peer_address": { 00:14:56.090 "trtype": "TCP", 00:14:56.090 "adrfam": "IPv4", 00:14:56.090 "traddr": "10.0.0.1", 00:14:56.090 "trsvcid": "49698" 00:14:56.090 }, 00:14:56.090 "auth": { 00:14:56.090 "state": "completed", 00:14:56.090 "digest": "sha256", 00:14:56.090 "dhgroup": "ffdhe6144" 00:14:56.090 } 00:14:56.090 } 00:14:56.090 ]' 00:14:56.090 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.349 05:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.608 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:56.608 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.544 05:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.802 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.370 00:14:58.370 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.370 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.370 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.629 { 00:14:58.629 "cntlid": 35, 00:14:58.629 "qid": 0, 00:14:58.629 "state": "enabled", 00:14:58.629 "thread": "nvmf_tgt_poll_group_000", 
00:14:58.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:14:58.629 "listen_address": { 00:14:58.629 "trtype": "TCP", 00:14:58.629 "adrfam": "IPv4", 00:14:58.629 "traddr": "10.0.0.3", 00:14:58.629 "trsvcid": "4420" 00:14:58.629 }, 00:14:58.629 "peer_address": { 00:14:58.629 "trtype": "TCP", 00:14:58.629 "adrfam": "IPv4", 00:14:58.629 "traddr": "10.0.0.1", 00:14:58.629 "trsvcid": "51938" 00:14:58.629 }, 00:14:58.629 "auth": { 00:14:58.629 "state": "completed", 00:14:58.629 "digest": "sha256", 00:14:58.629 "dhgroup": "ffdhe6144" 00:14:58.629 } 00:14:58.629 } 00:14:58.629 ]' 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.629 05:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.629 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.630 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.630 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.630 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.630 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.888 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:58.888 05:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.824 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.824 05:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.083 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.649 00:15:00.650 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.650 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.650 05:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.908 { 
00:15:00.908 "cntlid": 37, 00:15:00.908 "qid": 0, 00:15:00.908 "state": "enabled", 00:15:00.908 "thread": "nvmf_tgt_poll_group_000", 00:15:00.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:00.908 "listen_address": { 00:15:00.908 "trtype": "TCP", 00:15:00.908 "adrfam": "IPv4", 00:15:00.908 "traddr": "10.0.0.3", 00:15:00.908 "trsvcid": "4420" 00:15:00.908 }, 00:15:00.908 "peer_address": { 00:15:00.908 "trtype": "TCP", 00:15:00.908 "adrfam": "IPv4", 00:15:00.908 "traddr": "10.0.0.1", 00:15:00.908 "trsvcid": "51954" 00:15:00.908 }, 00:15:00.908 "auth": { 00:15:00.908 "state": "completed", 00:15:00.908 "digest": "sha256", 00:15:00.908 "dhgroup": "ffdhe6144" 00:15:00.908 } 00:15:00.908 } 00:15:00.908 ]' 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.908 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.474 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:01.474 05:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:02.040 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.608 05:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.174 00:15:03.174 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.174 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.174 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:03.432 { 00:15:03.432 "cntlid": 39, 00:15:03.432 "qid": 0, 00:15:03.432 "state": "enabled", 00:15:03.432 "thread": "nvmf_tgt_poll_group_000", 00:15:03.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:03.432 "listen_address": { 00:15:03.432 "trtype": "TCP", 00:15:03.432 "adrfam": "IPv4", 00:15:03.432 "traddr": "10.0.0.3", 00:15:03.432 "trsvcid": "4420" 00:15:03.432 }, 00:15:03.432 "peer_address": { 00:15:03.432 "trtype": "TCP", 00:15:03.432 "adrfam": "IPv4", 00:15:03.432 "traddr": "10.0.0.1", 00:15:03.432 "trsvcid": "51980" 00:15:03.432 }, 00:15:03.432 "auth": { 00:15:03.432 "state": "completed", 00:15:03.432 "digest": "sha256", 00:15:03.432 "dhgroup": "ffdhe6144" 00:15:03.432 } 00:15:03.432 } 00:15:03.432 ]' 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.432 05:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.691 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:03.691 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.625 05:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.881 05:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.813 00:15:05.813 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.813 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.813 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.071 { 00:15:06.071 "cntlid": 41, 00:15:06.071 "qid": 0, 00:15:06.071 "state": "enabled", 00:15:06.071 "thread": "nvmf_tgt_poll_group_000", 00:15:06.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:06.071 "listen_address": { 00:15:06.071 "trtype": "TCP", 00:15:06.071 "adrfam": "IPv4", 00:15:06.071 "traddr": "10.0.0.3", 00:15:06.071 "trsvcid": "4420" 00:15:06.071 }, 00:15:06.071 "peer_address": { 00:15:06.071 "trtype": "TCP", 00:15:06.071 "adrfam": "IPv4", 00:15:06.071 "traddr": "10.0.0.1", 00:15:06.071 "trsvcid": "52014" 00:15:06.071 }, 00:15:06.071 "auth": { 00:15:06.071 "state": "completed", 00:15:06.071 "digest": "sha256", 00:15:06.071 "dhgroup": "ffdhe8192" 00:15:06.071 } 00:15:06.071 } 00:15:06.071 ]' 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.071 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.329 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.329 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.329 05:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.587 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:06.587 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:07.521 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.521 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:07.521 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:07.522 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.522 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.522 05:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.780 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.714 00:15:08.714 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.714 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.714 05:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.973 05:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.973 { 00:15:08.973 "cntlid": 43, 00:15:08.973 "qid": 0, 00:15:08.973 "state": "enabled", 00:15:08.973 "thread": "nvmf_tgt_poll_group_000", 00:15:08.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:08.973 "listen_address": { 00:15:08.973 "trtype": "TCP", 00:15:08.973 "adrfam": "IPv4", 00:15:08.973 "traddr": "10.0.0.3", 00:15:08.973 "trsvcid": "4420" 00:15:08.973 }, 00:15:08.973 "peer_address": { 00:15:08.973 "trtype": "TCP", 00:15:08.973 "adrfam": "IPv4", 00:15:08.973 "traddr": "10.0.0.1", 00:15:08.973 "trsvcid": "52468" 00:15:08.973 }, 00:15:08.973 "auth": { 00:15:08.973 "state": "completed", 00:15:08.973 "digest": "sha256", 00:15:08.973 "dhgroup": "ffdhe8192" 00:15:08.973 } 00:15:08.973 } 00:15:08.973 ]' 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.973 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.539 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:09.539 05:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
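After each attach, the test verifies the result on both sides: nvmf_subsystem_get_qpairs on the target must report the negotiated digest, dhgroup, and a completed auth state, and bdev_nvme_get_controllers on the host must list the attached controller before it is detached again. A sketch of those checks, reusing the variables from the previous snippet:

# Target side: inspect the admin qpair's auth block.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Host side: the controller must exist, then tear it down for the next pass.
[[ $("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0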
00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:10.106 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.364 05:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.931 00:15:11.204 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.204 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.204 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.540 05:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.540 { 00:15:11.540 "cntlid": 45, 00:15:11.540 "qid": 0, 00:15:11.540 "state": "enabled", 00:15:11.540 "thread": "nvmf_tgt_poll_group_000", 00:15:11.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:11.540 "listen_address": { 00:15:11.540 "trtype": "TCP", 00:15:11.540 "adrfam": "IPv4", 00:15:11.540 "traddr": "10.0.0.3", 00:15:11.540 "trsvcid": "4420" 00:15:11.540 }, 00:15:11.540 "peer_address": { 00:15:11.540 "trtype": "TCP", 00:15:11.540 "adrfam": "IPv4", 00:15:11.540 "traddr": "10.0.0.1", 00:15:11.540 "trsvcid": "52512" 00:15:11.540 }, 00:15:11.540 "auth": { 00:15:11.540 "state": "completed", 00:15:11.540 "digest": "sha256", 00:15:11.540 "dhgroup": "ffdhe8192" 00:15:11.540 } 00:15:11.540 } 00:15:11.540 ]' 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.540 05:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.107 05:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:12.107 05:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:12.672 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.238 05:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.804 00:15:13.804 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.804 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.804 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.062 
05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.062 { 00:15:14.062 "cntlid": 47, 00:15:14.062 "qid": 0, 00:15:14.062 "state": "enabled", 00:15:14.062 "thread": "nvmf_tgt_poll_group_000", 00:15:14.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:14.062 "listen_address": { 00:15:14.062 "trtype": "TCP", 00:15:14.062 "adrfam": "IPv4", 00:15:14.062 "traddr": "10.0.0.3", 00:15:14.062 "trsvcid": "4420" 00:15:14.062 }, 00:15:14.062 "peer_address": { 00:15:14.062 "trtype": "TCP", 00:15:14.062 "adrfam": "IPv4", 00:15:14.062 "traddr": "10.0.0.1", 00:15:14.062 "trsvcid": "52538" 00:15:14.062 }, 00:15:14.062 "auth": { 00:15:14.062 "state": "completed", 00:15:14.062 "digest": "sha256", 00:15:14.062 "dhgroup": "ffdhe8192" 00:15:14.062 } 00:15:14.062 } 00:15:14.062 ]' 00:15:14.062 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.321 05:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.580 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:14.580 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:15.513 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
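The key3 passes above carry no --dhchap-ctrlr-key on nvmf_subsystem_add_host or bdev_nvme_attach_controller. That comes from the ckey assignment traced at auth.sh@68: bash's ${var:+word} expansion yields the extra arguments only when a controller key exists for that index, and the test deliberately leaves the last index without one. A small illustration with made-up array contents:

ckeys=("ckey0" "ckey1" "ckey2" "")          # index 3 intentionally empty
for keyid in "${!ckeys[@]}"; do
    # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid extra args: ${ckey[*]:-<none>}"
done
# key0..key2 print "--dhchap-ctrlr-key ckeyN"; key3 prints "<none>".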
00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.514 05:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.772 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.338 00:15:16.338 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.338 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.338 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.597 { 00:15:16.597 "cntlid": 49, 00:15:16.597 "qid": 0, 00:15:16.597 "state": "enabled", 00:15:16.597 "thread": "nvmf_tgt_poll_group_000", 00:15:16.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:16.597 "listen_address": { 00:15:16.597 "trtype": "TCP", 00:15:16.597 "adrfam": "IPv4", 00:15:16.597 "traddr": "10.0.0.3", 00:15:16.597 "trsvcid": "4420" 00:15:16.597 }, 00:15:16.597 "peer_address": { 00:15:16.597 "trtype": "TCP", 00:15:16.597 "adrfam": "IPv4", 00:15:16.597 "traddr": "10.0.0.1", 00:15:16.597 "trsvcid": "59008" 00:15:16.597 }, 00:15:16.597 "auth": { 00:15:16.597 "state": "completed", 00:15:16.597 "digest": "sha384", 00:15:16.597 "dhgroup": "null" 00:15:16.597 } 00:15:16.597 } 00:15:16.597 ]' 00:15:16.597 05:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.597 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.597 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.597 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.597 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.855 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.855 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.855 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.113 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:17.113 05:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.700 05:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:17.700 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.268 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.526 00:15:18.526 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.526 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
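By this point the trace has moved from sha256/ffdhe8192 to sha384/null, which reflects the loop nesting visible at auth.sh@118-121: every (digest, dhgroup, keyid) combination reconfigures the host app and runs one connect/verify/teardown pass. A sketch of that nesting; the array contents are abbreviated placeholders, and connect_authenticate stands for the auth.sh helper whose body is what the surrounding trace shows:

digests=(sha256 sha384)
dhgroups=(null ffdhe2048 ffdhe8192)
keys=(key0 key1 key2 key3)                   # placeholder names for the keyring entries
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Reconfigure the host app, then run one authenticated attach cycle.
            "$rpc" -s "$host_sock" bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done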
00:15:18.526 05:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.784 { 00:15:18.784 "cntlid": 51, 00:15:18.784 "qid": 0, 00:15:18.784 "state": "enabled", 00:15:18.784 "thread": "nvmf_tgt_poll_group_000", 00:15:18.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:18.784 "listen_address": { 00:15:18.784 "trtype": "TCP", 00:15:18.784 "adrfam": "IPv4", 00:15:18.784 "traddr": "10.0.0.3", 00:15:18.784 "trsvcid": "4420" 00:15:18.784 }, 00:15:18.784 "peer_address": { 00:15:18.784 "trtype": "TCP", 00:15:18.784 "adrfam": "IPv4", 00:15:18.784 "traddr": "10.0.0.1", 00:15:18.784 "trsvcid": "59030" 00:15:18.784 }, 00:15:18.784 "auth": { 00:15:18.784 "state": "completed", 00:15:18.784 "digest": "sha384", 00:15:18.784 "dhgroup": "null" 00:15:18.784 } 00:15:18.784 } 00:15:18.784 ]' 00:15:18.784 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.785 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.785 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.043 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.043 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.043 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.043 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.043 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.301 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:19.301 05:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.245 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.245 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.507 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.507 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.507 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.765 00:15:20.765 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.765 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:20.765 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.023 { 00:15:21.023 "cntlid": 53, 00:15:21.023 "qid": 0, 00:15:21.023 "state": "enabled", 00:15:21.023 "thread": "nvmf_tgt_poll_group_000", 00:15:21.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:21.023 "listen_address": { 00:15:21.023 "trtype": "TCP", 00:15:21.023 "adrfam": "IPv4", 00:15:21.023 "traddr": "10.0.0.3", 00:15:21.023 "trsvcid": "4420" 00:15:21.023 }, 00:15:21.023 "peer_address": { 00:15:21.023 "trtype": "TCP", 00:15:21.023 "adrfam": "IPv4", 00:15:21.023 "traddr": "10.0.0.1", 00:15:21.023 "trsvcid": "59050" 00:15:21.023 }, 00:15:21.023 "auth": { 00:15:21.023 "state": "completed", 00:15:21.023 "digest": "sha384", 00:15:21.023 "dhgroup": "null" 00:15:21.023 } 00:15:21.023 } 00:15:21.023 ]' 00:15:21.023 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.281 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.282 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.540 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:21.540 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:22.107 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.673 05:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.931 00:15:22.931 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.931 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:15:22.931 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.189 { 00:15:23.189 "cntlid": 55, 00:15:23.189 "qid": 0, 00:15:23.189 "state": "enabled", 00:15:23.189 "thread": "nvmf_tgt_poll_group_000", 00:15:23.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:23.189 "listen_address": { 00:15:23.189 "trtype": "TCP", 00:15:23.189 "adrfam": "IPv4", 00:15:23.189 "traddr": "10.0.0.3", 00:15:23.189 "trsvcid": "4420" 00:15:23.189 }, 00:15:23.189 "peer_address": { 00:15:23.189 "trtype": "TCP", 00:15:23.189 "adrfam": "IPv4", 00:15:23.189 "traddr": "10.0.0.1", 00:15:23.189 "trsvcid": "59068" 00:15:23.189 }, 00:15:23.189 "auth": { 00:15:23.189 "state": "completed", 00:15:23.189 "digest": "sha384", 00:15:23.189 "dhgroup": "null" 00:15:23.189 } 00:15:23.189 } 00:15:23.189 ]' 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:23.189 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.447 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.447 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.447 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.705 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:23.705 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:24.272 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
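Each combination is also exercised through the kernel initiator: nvme connect with the DHHC-1 host secret (plus the controller secret when one is configured for that key), a disconnect, and then nvmf_subsystem_remove_host before the next key index. A sketch of that step with the secrets replaced by placeholders (the real values appear in the trace) and the variables reused from the first snippet:

uuid=4bd82fc4-6e19-4d22-95c5-23a13095cd93

# Kernel initiator attach; authentication uses the secrets passed on the command line.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret "DHHC-1:00:<host key>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key>"

nvme disconnect -n "$subnqn"

# Remove the host entry so the next iteration can add it back with a different key.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:$uuid"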
00:15:24.272 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.273 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.838 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.839 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.839 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.839 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.096 00:15:25.096 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.096 
05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.096 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.354 { 00:15:25.354 "cntlid": 57, 00:15:25.354 "qid": 0, 00:15:25.354 "state": "enabled", 00:15:25.354 "thread": "nvmf_tgt_poll_group_000", 00:15:25.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:25.354 "listen_address": { 00:15:25.354 "trtype": "TCP", 00:15:25.354 "adrfam": "IPv4", 00:15:25.354 "traddr": "10.0.0.3", 00:15:25.354 "trsvcid": "4420" 00:15:25.354 }, 00:15:25.354 "peer_address": { 00:15:25.354 "trtype": "TCP", 00:15:25.354 "adrfam": "IPv4", 00:15:25.354 "traddr": "10.0.0.1", 00:15:25.354 "trsvcid": "59106" 00:15:25.354 }, 00:15:25.354 "auth": { 00:15:25.354 "state": "completed", 00:15:25.354 "digest": "sha384", 00:15:25.354 "dhgroup": "ffdhe2048" 00:15:25.354 } 00:15:25.354 } 00:15:25.354 ]' 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.354 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.613 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.613 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.613 05:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.871 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:25.871 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: 
--dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:26.438 05:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:26.695 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:26.695 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.695 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.695 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:26.695 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.953 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.211 00:15:27.211 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.211 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.211 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.470 { 00:15:27.470 "cntlid": 59, 00:15:27.470 "qid": 0, 00:15:27.470 "state": "enabled", 00:15:27.470 "thread": "nvmf_tgt_poll_group_000", 00:15:27.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:27.470 "listen_address": { 00:15:27.470 "trtype": "TCP", 00:15:27.470 "adrfam": "IPv4", 00:15:27.470 "traddr": "10.0.0.3", 00:15:27.470 "trsvcid": "4420" 00:15:27.470 }, 00:15:27.470 "peer_address": { 00:15:27.470 "trtype": "TCP", 00:15:27.470 "adrfam": "IPv4", 00:15:27.470 "traddr": "10.0.0.1", 00:15:27.470 "trsvcid": "50628" 00:15:27.470 }, 00:15:27.470 "auth": { 00:15:27.470 "state": "completed", 00:15:27.470 "digest": "sha384", 00:15:27.470 "dhgroup": "ffdhe2048" 00:15:27.470 } 00:15:27.470 } 00:15:27.470 ]' 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.470 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.728 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:27.728 05:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.728 05:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.728 05:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.728 05:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.986 05:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:27.986 05:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.919 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.176 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.434 00:15:29.434 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.434 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.434 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.692 { 00:15:29.692 "cntlid": 61, 00:15:29.692 "qid": 0, 00:15:29.692 "state": "enabled", 00:15:29.692 "thread": "nvmf_tgt_poll_group_000", 00:15:29.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:29.692 "listen_address": { 00:15:29.692 "trtype": "TCP", 00:15:29.692 "adrfam": "IPv4", 00:15:29.692 "traddr": "10.0.0.3", 00:15:29.692 "trsvcid": "4420" 00:15:29.692 }, 00:15:29.692 "peer_address": { 00:15:29.692 "trtype": "TCP", 00:15:29.692 "adrfam": "IPv4", 00:15:29.692 "traddr": "10.0.0.1", 00:15:29.692 "trsvcid": "50660" 00:15:29.692 }, 00:15:29.692 "auth": { 00:15:29.692 "state": "completed", 00:15:29.692 "digest": "sha384", 00:15:29.692 "dhgroup": "ffdhe2048" 00:15:29.692 } 00:15:29.692 } 00:15:29.692 ]' 00:15:29.692 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.950 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.208 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:30.208 05:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:31.141 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.400 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.658 00:15:31.658 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.658 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.658 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.223 { 00:15:32.223 "cntlid": 63, 00:15:32.223 "qid": 0, 00:15:32.223 "state": "enabled", 00:15:32.223 "thread": "nvmf_tgt_poll_group_000", 00:15:32.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:32.223 "listen_address": { 00:15:32.223 "trtype": "TCP", 00:15:32.223 "adrfam": "IPv4", 00:15:32.223 "traddr": "10.0.0.3", 00:15:32.223 "trsvcid": "4420" 00:15:32.223 }, 00:15:32.223 "peer_address": { 00:15:32.223 "trtype": "TCP", 00:15:32.223 "adrfam": "IPv4", 00:15:32.223 "traddr": "10.0.0.1", 00:15:32.223 "trsvcid": "50680" 00:15:32.223 }, 00:15:32.223 "auth": { 00:15:32.223 "state": "completed", 00:15:32.223 "digest": "sha384", 00:15:32.223 "dhgroup": "ffdhe2048" 00:15:32.223 } 00:15:32.223 } 00:15:32.223 ]' 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.223 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.481 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:32.481 05:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:33.415 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.673 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.673 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.673 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.673 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:33.673 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.931 00:15:33.931 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.931 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.931 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.497 { 00:15:34.497 "cntlid": 65, 00:15:34.497 "qid": 0, 00:15:34.497 "state": "enabled", 00:15:34.497 "thread": "nvmf_tgt_poll_group_000", 00:15:34.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:34.497 "listen_address": { 00:15:34.497 "trtype": "TCP", 00:15:34.497 "adrfam": "IPv4", 00:15:34.497 "traddr": "10.0.0.3", 00:15:34.497 "trsvcid": "4420" 00:15:34.497 }, 00:15:34.497 "peer_address": { 00:15:34.497 "trtype": "TCP", 00:15:34.497 "adrfam": "IPv4", 00:15:34.497 "traddr": "10.0.0.1", 00:15:34.497 "trsvcid": "50694" 00:15:34.497 }, 00:15:34.497 "auth": { 00:15:34.497 "state": "completed", 00:15:34.497 "digest": "sha384", 00:15:34.497 "dhgroup": "ffdhe3072" 00:15:34.497 } 00:15:34.497 } 00:15:34.497 ]' 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.497 05:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.068 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:35.068 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:35.654 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.912 05:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.912 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.479 00:15:36.479 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.479 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.479 05:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.738 { 00:15:36.738 "cntlid": 67, 00:15:36.738 "qid": 0, 00:15:36.738 "state": "enabled", 00:15:36.738 "thread": "nvmf_tgt_poll_group_000", 00:15:36.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:36.738 "listen_address": { 00:15:36.738 "trtype": "TCP", 00:15:36.738 "adrfam": "IPv4", 00:15:36.738 "traddr": "10.0.0.3", 00:15:36.738 "trsvcid": "4420" 00:15:36.738 }, 00:15:36.738 "peer_address": { 00:15:36.738 "trtype": "TCP", 00:15:36.738 "adrfam": "IPv4", 00:15:36.738 "traddr": "10.0.0.1", 00:15:36.738 "trsvcid": "36094" 00:15:36.738 }, 00:15:36.738 "auth": { 00:15:36.738 "state": "completed", 00:15:36.738 "digest": "sha384", 00:15:36.738 "dhgroup": "ffdhe3072" 00:15:36.738 } 00:15:36.738 } 00:15:36.738 ]' 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.738 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.306 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:37.306 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.875 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.134 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.701 00:15:38.701 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.701 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.701 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.960 { 00:15:38.960 "cntlid": 69, 00:15:38.960 "qid": 0, 00:15:38.960 "state": "enabled", 00:15:38.960 "thread": "nvmf_tgt_poll_group_000", 00:15:38.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:38.960 "listen_address": { 00:15:38.960 "trtype": "TCP", 00:15:38.960 "adrfam": "IPv4", 00:15:38.960 "traddr": "10.0.0.3", 00:15:38.960 "trsvcid": "4420" 00:15:38.960 }, 00:15:38.960 "peer_address": { 00:15:38.960 "trtype": "TCP", 00:15:38.960 "adrfam": "IPv4", 00:15:38.960 "traddr": "10.0.0.1", 00:15:38.960 "trsvcid": "36134" 00:15:38.960 }, 00:15:38.960 "auth": { 00:15:38.960 "state": "completed", 00:15:38.960 "digest": "sha384", 00:15:38.960 "dhgroup": "ffdhe3072" 00:15:38.960 } 00:15:38.960 } 00:15:38.960 ]' 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:38.960 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.526 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:39.526 05:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:40.091 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.091 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:40.091 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.091 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.092 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.092 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.092 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.092 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.348 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.606 00:15:40.606 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.606 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.606 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.865 { 00:15:40.865 "cntlid": 71, 00:15:40.865 "qid": 0, 00:15:40.865 "state": "enabled", 00:15:40.865 "thread": "nvmf_tgt_poll_group_000", 00:15:40.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:40.865 "listen_address": { 00:15:40.865 "trtype": "TCP", 00:15:40.865 "adrfam": "IPv4", 00:15:40.865 "traddr": "10.0.0.3", 00:15:40.865 "trsvcid": "4420" 00:15:40.865 }, 00:15:40.865 "peer_address": { 00:15:40.865 "trtype": "TCP", 00:15:40.865 "adrfam": "IPv4", 00:15:40.865 "traddr": "10.0.0.1", 00:15:40.865 "trsvcid": "36160" 00:15:40.865 }, 00:15:40.865 "auth": { 00:15:40.865 "state": "completed", 00:15:40.865 "digest": "sha384", 00:15:40.865 "dhgroup": "ffdhe3072" 00:15:40.865 } 00:15:40.865 } 00:15:40.865 ]' 00:15:40.865 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.123 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.381 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:41.381 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:42.097 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.662 05:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.662 05:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.920 00:15:42.920 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.920 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.920 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.487 { 00:15:43.487 "cntlid": 73, 00:15:43.487 "qid": 0, 00:15:43.487 "state": "enabled", 00:15:43.487 "thread": "nvmf_tgt_poll_group_000", 00:15:43.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:43.487 "listen_address": { 00:15:43.487 "trtype": "TCP", 00:15:43.487 "adrfam": "IPv4", 00:15:43.487 "traddr": "10.0.0.3", 00:15:43.487 "trsvcid": "4420" 00:15:43.487 }, 00:15:43.487 "peer_address": { 00:15:43.487 "trtype": "TCP", 00:15:43.487 "adrfam": "IPv4", 00:15:43.487 "traddr": "10.0.0.1", 00:15:43.487 "trsvcid": "36196" 00:15:43.487 }, 00:15:43.487 "auth": { 00:15:43.487 "state": "completed", 00:15:43.487 "digest": "sha384", 00:15:43.487 "dhgroup": "ffdhe4096" 00:15:43.487 } 00:15:43.487 } 00:15:43.487 ]' 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.487 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.745 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:43.745 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:44.682 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.939 05:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.939 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.197 00:15:45.197 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.197 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.197 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.456 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.456 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.456 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.456 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.715 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.715 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.715 { 00:15:45.715 "cntlid": 75, 00:15:45.715 "qid": 0, 00:15:45.715 "state": "enabled", 00:15:45.715 "thread": "nvmf_tgt_poll_group_000", 00:15:45.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:45.715 "listen_address": { 00:15:45.715 "trtype": "TCP", 00:15:45.715 "adrfam": "IPv4", 00:15:45.715 "traddr": "10.0.0.3", 00:15:45.715 "trsvcid": "4420" 00:15:45.715 }, 00:15:45.715 "peer_address": { 00:15:45.715 "trtype": "TCP", 00:15:45.715 "adrfam": "IPv4", 00:15:45.715 "traddr": "10.0.0.1", 00:15:45.715 "trsvcid": "36236" 00:15:45.716 }, 00:15:45.716 "auth": { 00:15:45.716 "state": "completed", 00:15:45.716 "digest": "sha384", 00:15:45.716 "dhgroup": "ffdhe4096" 00:15:45.716 } 00:15:45.716 } 00:15:45.716 ]' 00:15:45.716 05:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.716 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.974 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:45.974 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.909 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.167 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.426 00:15:47.426 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.426 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.426 05:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.684 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.684 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.684 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.684 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.684 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.942 { 00:15:47.942 "cntlid": 77, 00:15:47.942 "qid": 0, 00:15:47.942 "state": "enabled", 00:15:47.942 "thread": "nvmf_tgt_poll_group_000", 00:15:47.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:47.942 "listen_address": { 00:15:47.942 "trtype": "TCP", 00:15:47.942 "adrfam": "IPv4", 00:15:47.942 "traddr": "10.0.0.3", 00:15:47.942 "trsvcid": "4420" 00:15:47.942 }, 00:15:47.942 "peer_address": { 00:15:47.942 "trtype": "TCP", 00:15:47.942 "adrfam": "IPv4", 00:15:47.942 "traddr": "10.0.0.1", 00:15:47.942 "trsvcid": "46692" 00:15:47.942 }, 00:15:47.942 "auth": { 00:15:47.942 "state": "completed", 00:15:47.942 "digest": "sha384", 00:15:47.942 "dhgroup": "ffdhe4096" 00:15:47.942 } 00:15:47.942 } 00:15:47.942 ]' 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.942 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.200 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:48.200 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.134 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.392 05:26:03 
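The keyid 3 pass that starts just above is the unidirectional case: ckeys[3] is empty in auth.sh, so the :+ parameter expansion at auth.sh@68 produces no --dhchap-ctrlr-key argument and the registration that follows carries only the host key. A minimal sketch of that step (a reconstruction from the trace, not the literal script source):

    # auth.sh@68: ckeys[3] is empty, so the expansion yields an empty array for keyid 3
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    # the resulting registration authenticates the host only, not the controller
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 \
        --dhchap-key key3 "${ckey[@]}"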
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.392 05:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.651 00:15:49.651 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.651 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.651 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.909 { 00:15:49.909 "cntlid": 79, 00:15:49.909 "qid": 0, 00:15:49.909 "state": "enabled", 00:15:49.909 "thread": "nvmf_tgt_poll_group_000", 00:15:49.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:49.909 "listen_address": { 00:15:49.909 "trtype": "TCP", 00:15:49.909 "adrfam": "IPv4", 00:15:49.909 "traddr": "10.0.0.3", 00:15:49.909 "trsvcid": "4420" 00:15:49.909 }, 00:15:49.909 "peer_address": { 00:15:49.909 "trtype": "TCP", 00:15:49.909 "adrfam": "IPv4", 00:15:49.909 "traddr": "10.0.0.1", 00:15:49.909 "trsvcid": "46704" 00:15:49.909 }, 00:15:49.909 "auth": { 00:15:49.909 "state": "completed", 00:15:49.909 "digest": "sha384", 00:15:49.909 "dhgroup": "ffdhe4096" 00:15:49.909 } 00:15:49.909 } 00:15:49.909 ]' 00:15:49.909 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.182 05:26:04 
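The qpair dump above is how every pass is verified: nvmf_subsystem_get_qpairs reports the digest, DH group and authentication state actually negotiated on the new queue pair, and the script asserts on each field with jq. Condensed, the auth.sh@74-@77 steps amount to:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

A failed comparison returns non-zero and fails the test at that line, which is why the trace only ever shows the successful checks.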
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.182 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.440 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:50.440 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.007 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.265 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.266 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.832 00:15:51.832 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.832 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.832 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.398 { 00:15:52.398 "cntlid": 81, 00:15:52.398 "qid": 0, 00:15:52.398 "state": "enabled", 00:15:52.398 "thread": "nvmf_tgt_poll_group_000", 00:15:52.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:52.398 "listen_address": { 00:15:52.398 "trtype": "TCP", 00:15:52.398 "adrfam": "IPv4", 00:15:52.398 "traddr": "10.0.0.3", 00:15:52.398 "trsvcid": "4420" 00:15:52.398 }, 00:15:52.398 "peer_address": { 00:15:52.398 "trtype": "TCP", 00:15:52.398 "adrfam": "IPv4", 00:15:52.398 "traddr": "10.0.0.1", 00:15:52.398 "trsvcid": "46728" 00:15:52.398 }, 00:15:52.398 "auth": { 00:15:52.398 "state": "completed", 00:15:52.398 "digest": "sha384", 00:15:52.398 "dhgroup": "ffdhe6144" 00:15:52.398 } 00:15:52.398 } 00:15:52.398 ]' 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
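Each (digest, dhgroup, keyid) combination is provisioned the same way; for the ffdhe6144/key0 pass above, the sequence reduces to one host-side option change, one target-side host registration, and one authenticated attach. A sketch with the trace noise removed (rpc_cmd goes to the target's RPC socket, hostrpc to /var/tmp/host.sock, exactly as shown in the trace):

    # host side: restrict the initiator to the digest/DH group under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: register the host NQN with this pass's key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which forces the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0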
00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.398 05:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.964 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:52.964 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:15:53.529 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.530 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.788 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.354 00:15:54.354 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.354 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.354 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.612 { 00:15:54.612 "cntlid": 83, 00:15:54.612 "qid": 0, 00:15:54.612 "state": "enabled", 00:15:54.612 "thread": "nvmf_tgt_poll_group_000", 00:15:54.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:54.612 "listen_address": { 00:15:54.612 "trtype": "TCP", 00:15:54.612 "adrfam": "IPv4", 00:15:54.612 "traddr": "10.0.0.3", 00:15:54.612 "trsvcid": "4420" 00:15:54.612 }, 00:15:54.612 "peer_address": { 00:15:54.612 "trtype": "TCP", 00:15:54.612 "adrfam": "IPv4", 00:15:54.612 "traddr": "10.0.0.1", 00:15:54.612 "trsvcid": "46760" 00:15:54.612 }, 00:15:54.612 "auth": { 00:15:54.612 "state": "completed", 00:15:54.612 "digest": "sha384", 
00:15:54.612 "dhgroup": "ffdhe6144" 00:15:54.612 } 00:15:54.612 } 00:15:54.612 ]' 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.612 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.875 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.875 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.875 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.134 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:55.134 05:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:55.700 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.268 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.527 00:15:56.527 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.527 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.527 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.892 { 00:15:56.892 "cntlid": 85, 00:15:56.892 "qid": 0, 00:15:56.892 "state": "enabled", 00:15:56.892 "thread": "nvmf_tgt_poll_group_000", 00:15:56.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:56.892 "listen_address": { 00:15:56.892 "trtype": "TCP", 00:15:56.892 "adrfam": "IPv4", 00:15:56.892 "traddr": "10.0.0.3", 00:15:56.892 "trsvcid": "4420" 00:15:56.892 }, 00:15:56.892 "peer_address": { 00:15:56.892 "trtype": "TCP", 00:15:56.892 "adrfam": "IPv4", 00:15:56.892 "traddr": "10.0.0.1", 00:15:56.892 "trsvcid": "50902" 
00:15:56.892 }, 00:15:56.892 "auth": { 00:15:56.892 "state": "completed", 00:15:56.892 "digest": "sha384", 00:15:56.892 "dhgroup": "ffdhe6144" 00:15:56.892 } 00:15:56.892 } 00:15:56.892 ]' 00:15:56.892 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.151 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.410 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:57.410 05:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:15:58.345 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.345 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:15:58.345 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.345 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.346 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.346 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.346 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.346 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.604 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.171 00:15:59.171 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.171 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.171 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.429 { 00:15:59.429 "cntlid": 87, 00:15:59.429 "qid": 0, 00:15:59.429 "state": "enabled", 00:15:59.429 "thread": "nvmf_tgt_poll_group_000", 00:15:59.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:15:59.429 "listen_address": { 00:15:59.429 "trtype": "TCP", 00:15:59.429 "adrfam": "IPv4", 00:15:59.429 "traddr": "10.0.0.3", 00:15:59.429 "trsvcid": "4420" 00:15:59.429 }, 00:15:59.429 "peer_address": { 00:15:59.429 "trtype": "TCP", 00:15:59.429 "adrfam": "IPv4", 00:15:59.429 "traddr": "10.0.0.1", 00:15:59.429 "trsvcid": 
"50932" 00:15:59.429 }, 00:15:59.429 "auth": { 00:15:59.429 "state": "completed", 00:15:59.429 "digest": "sha384", 00:15:59.429 "dhgroup": "ffdhe6144" 00:15:59.429 } 00:15:59.429 } 00:15:59.429 ]' 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.429 05:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.688 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:15:59.688 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:00.623 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.624 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.624 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.558 00:16:01.559 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.559 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.559 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.817 { 00:16:01.817 "cntlid": 89, 00:16:01.817 "qid": 0, 00:16:01.817 "state": "enabled", 00:16:01.817 "thread": "nvmf_tgt_poll_group_000", 00:16:01.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:01.817 "listen_address": { 00:16:01.817 "trtype": "TCP", 00:16:01.817 "adrfam": "IPv4", 00:16:01.817 "traddr": "10.0.0.3", 00:16:01.817 "trsvcid": "4420" 00:16:01.817 }, 00:16:01.817 "peer_address": { 00:16:01.817 
"trtype": "TCP", 00:16:01.817 "adrfam": "IPv4", 00:16:01.817 "traddr": "10.0.0.1", 00:16:01.817 "trsvcid": "50964" 00:16:01.817 }, 00:16:01.817 "auth": { 00:16:01.817 "state": "completed", 00:16:01.817 "digest": "sha384", 00:16:01.817 "dhgroup": "ffdhe8192" 00:16:01.817 } 00:16:01.817 } 00:16:01.817 ]' 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.817 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.383 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:02.383 05:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:02.950 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.208 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.466 05:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.467 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.040 00:16:04.040 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.040 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.040 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.349 { 00:16:04.349 "cntlid": 91, 00:16:04.349 "qid": 0, 00:16:04.349 "state": "enabled", 00:16:04.349 "thread": "nvmf_tgt_poll_group_000", 00:16:04.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 
00:16:04.349 "listen_address": { 00:16:04.349 "trtype": "TCP", 00:16:04.349 "adrfam": "IPv4", 00:16:04.349 "traddr": "10.0.0.3", 00:16:04.349 "trsvcid": "4420" 00:16:04.349 }, 00:16:04.349 "peer_address": { 00:16:04.349 "trtype": "TCP", 00:16:04.349 "adrfam": "IPv4", 00:16:04.349 "traddr": "10.0.0.1", 00:16:04.349 "trsvcid": "50996" 00:16:04.349 }, 00:16:04.349 "auth": { 00:16:04.349 "state": "completed", 00:16:04.349 "digest": "sha384", 00:16:04.349 "dhgroup": "ffdhe8192" 00:16:04.349 } 00:16:04.349 } 00:16:04.349 ]' 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.349 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.607 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.607 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.607 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.607 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.607 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.865 05:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:04.865 05:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.800 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.058 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.993 00:16:06.993 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.993 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.993 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.252 { 00:16:07.252 "cntlid": 93, 00:16:07.252 "qid": 0, 00:16:07.252 "state": "enabled", 00:16:07.252 "thread": 
"nvmf_tgt_poll_group_000", 00:16:07.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:07.252 "listen_address": { 00:16:07.252 "trtype": "TCP", 00:16:07.252 "adrfam": "IPv4", 00:16:07.252 "traddr": "10.0.0.3", 00:16:07.252 "trsvcid": "4420" 00:16:07.252 }, 00:16:07.252 "peer_address": { 00:16:07.252 "trtype": "TCP", 00:16:07.252 "adrfam": "IPv4", 00:16:07.252 "traddr": "10.0.0.1", 00:16:07.252 "trsvcid": "44276" 00:16:07.252 }, 00:16:07.252 "auth": { 00:16:07.252 "state": "completed", 00:16:07.252 "digest": "sha384", 00:16:07.252 "dhgroup": "ffdhe8192" 00:16:07.252 } 00:16:07.252 } 00:16:07.252 ]' 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.252 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.517 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.517 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.517 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.517 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.517 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.776 05:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:07.776 05:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.711 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.711 05:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.970 05:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.904 00:16:09.904 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.904 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.904 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.163 { 00:16:10.163 "cntlid": 95, 00:16:10.163 "qid": 0, 00:16:10.163 "state": "enabled", 00:16:10.163 
"thread": "nvmf_tgt_poll_group_000", 00:16:10.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:10.163 "listen_address": { 00:16:10.163 "trtype": "TCP", 00:16:10.163 "adrfam": "IPv4", 00:16:10.163 "traddr": "10.0.0.3", 00:16:10.163 "trsvcid": "4420" 00:16:10.163 }, 00:16:10.163 "peer_address": { 00:16:10.163 "trtype": "TCP", 00:16:10.163 "adrfam": "IPv4", 00:16:10.163 "traddr": "10.0.0.1", 00:16:10.163 "trsvcid": "44312" 00:16:10.163 }, 00:16:10.163 "auth": { 00:16:10.163 "state": "completed", 00:16:10.163 "digest": "sha384", 00:16:10.163 "dhgroup": "ffdhe8192" 00:16:10.163 } 00:16:10.163 } 00:16:10.163 ]' 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.163 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.422 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:10.422 05:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.356 05:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.356 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.711 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.970 00:16:11.970 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.970 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.970 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.228 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.228 { 00:16:12.228 "cntlid": 97, 00:16:12.228 "qid": 0, 00:16:12.228 "state": "enabled", 00:16:12.228 "thread": "nvmf_tgt_poll_group_000", 00:16:12.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:12.229 "listen_address": { 00:16:12.229 "trtype": "TCP", 00:16:12.229 "adrfam": "IPv4", 00:16:12.229 "traddr": "10.0.0.3", 00:16:12.229 "trsvcid": "4420" 00:16:12.229 }, 00:16:12.229 "peer_address": { 00:16:12.229 "trtype": "TCP", 00:16:12.229 "adrfam": "IPv4", 00:16:12.229 "traddr": "10.0.0.1", 00:16:12.229 "trsvcid": "44348" 00:16:12.229 }, 00:16:12.229 "auth": { 00:16:12.229 "state": "completed", 00:16:12.229 "digest": "sha512", 00:16:12.229 "dhgroup": "null" 00:16:12.229 } 00:16:12.229 } 00:16:12.229 ]' 00:16:12.229 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.487 05:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.745 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:12.745 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.680 05:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.938 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.196 00:16:14.196 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.196 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.196 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.454 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.454 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.454 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 05:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.713 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.713 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.713 { 00:16:14.713 "cntlid": 99, 00:16:14.713 "qid": 0, 00:16:14.713 "state": "enabled", 00:16:14.713 "thread": "nvmf_tgt_poll_group_000", 00:16:14.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:14.713 "listen_address": { 00:16:14.713 "trtype": "TCP", 00:16:14.713 "adrfam": "IPv4", 00:16:14.713 "traddr": "10.0.0.3", 00:16:14.713 "trsvcid": "4420" 00:16:14.713 }, 00:16:14.713 "peer_address": { 00:16:14.713 "trtype": "TCP", 00:16:14.713 "adrfam": "IPv4", 00:16:14.713 "traddr": "10.0.0.1", 00:16:14.713 "trsvcid": "44372" 00:16:14.713 }, 00:16:14.713 "auth": { 00:16:14.713 "state": "completed", 00:16:14.713 "digest": "sha512", 00:16:14.713 "dhgroup": "null" 00:16:14.713 } 00:16:14.713 } 00:16:14.713 ]' 00:16:14.713 05:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.713 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.278 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:15.278 05:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.844 05:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.844 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.102 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:16.102 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.103 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.670 00:16:16.670 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.670 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.670 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.929 { 00:16:16.929 "cntlid": 101, 00:16:16.929 "qid": 0, 00:16:16.929 "state": "enabled", 00:16:16.929 "thread": "nvmf_tgt_poll_group_000", 00:16:16.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:16.929 "listen_address": { 00:16:16.929 "trtype": "TCP", 00:16:16.929 "adrfam": "IPv4", 00:16:16.929 "traddr": "10.0.0.3", 00:16:16.929 "trsvcid": "4420" 00:16:16.929 }, 00:16:16.929 "peer_address": { 00:16:16.929 "trtype": "TCP", 00:16:16.929 "adrfam": "IPv4", 00:16:16.929 "traddr": "10.0.0.1", 00:16:16.929 "trsvcid": "58236" 00:16:16.929 }, 00:16:16.929 "auth": { 00:16:16.929 "state": "completed", 00:16:16.929 "digest": "sha512", 00:16:16.929 "dhgroup": "null" 00:16:16.929 } 00:16:16.929 } 00:16:16.929 ]' 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.929 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.498 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:17.498 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.077 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.078 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.357 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.924 00:16:18.924 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.924 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.924 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.182 { 00:16:19.182 "cntlid": 103, 00:16:19.182 "qid": 0, 00:16:19.182 "state": "enabled", 00:16:19.182 "thread": "nvmf_tgt_poll_group_000", 00:16:19.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:19.182 "listen_address": { 00:16:19.182 "trtype": "TCP", 00:16:19.182 "adrfam": "IPv4", 00:16:19.182 "traddr": "10.0.0.3", 00:16:19.182 "trsvcid": "4420" 00:16:19.182 }, 00:16:19.182 "peer_address": { 00:16:19.182 "trtype": "TCP", 00:16:19.182 "adrfam": "IPv4", 00:16:19.182 "traddr": "10.0.0.1", 00:16:19.182 "trsvcid": "58270" 00:16:19.182 }, 00:16:19.182 "auth": { 00:16:19.182 "state": "completed", 00:16:19.182 "digest": "sha512", 00:16:19.182 "dhgroup": "null" 00:16:19.182 } 00:16:19.182 } 00:16:19.182 ]' 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.182 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.441 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.441 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.441 05:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.699 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:19.699 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.266 05:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.833 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.092 00:16:21.092 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.092 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.092 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.350 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.350 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.350 
05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.350 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.350 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.350 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.350 { 00:16:21.350 "cntlid": 105, 00:16:21.350 "qid": 0, 00:16:21.350 "state": "enabled", 00:16:21.350 "thread": "nvmf_tgt_poll_group_000", 00:16:21.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:21.350 "listen_address": { 00:16:21.350 "trtype": "TCP", 00:16:21.350 "adrfam": "IPv4", 00:16:21.350 "traddr": "10.0.0.3", 00:16:21.350 "trsvcid": "4420" 00:16:21.350 }, 00:16:21.350 "peer_address": { 00:16:21.350 "trtype": "TCP", 00:16:21.350 "adrfam": "IPv4", 00:16:21.350 "traddr": "10.0.0.1", 00:16:21.350 "trsvcid": "58288" 00:16:21.350 }, 00:16:21.350 "auth": { 00:16:21.350 "state": "completed", 00:16:21.350 "digest": "sha512", 00:16:21.350 "dhgroup": "ffdhe2048" 00:16:21.351 } 00:16:21.351 } 00:16:21.351 ]' 00:16:21.351 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.351 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.351 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.351 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.351 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.609 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.609 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.609 05:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.867 05:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:21.867 05:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:22.801 05:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.801 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.060 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.318 00:16:23.318 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.318 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.318 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.576 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:23.576 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.576 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.576 05:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.576 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.576 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.576 { 00:16:23.576 "cntlid": 107, 00:16:23.576 "qid": 0, 00:16:23.576 "state": "enabled", 00:16:23.576 "thread": "nvmf_tgt_poll_group_000", 00:16:23.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:23.576 "listen_address": { 00:16:23.576 "trtype": "TCP", 00:16:23.576 "adrfam": "IPv4", 00:16:23.576 "traddr": "10.0.0.3", 00:16:23.576 "trsvcid": "4420" 00:16:23.576 }, 00:16:23.576 "peer_address": { 00:16:23.576 "trtype": "TCP", 00:16:23.576 "adrfam": "IPv4", 00:16:23.576 "traddr": "10.0.0.1", 00:16:23.576 "trsvcid": "58332" 00:16:23.576 }, 00:16:23.576 "auth": { 00:16:23.576 "state": "completed", 00:16:23.576 "digest": "sha512", 00:16:23.576 "dhgroup": "ffdhe2048" 00:16:23.576 } 00:16:23.576 } 00:16:23.576 ]' 00:16:23.576 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.576 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.576 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.834 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.834 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.834 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.834 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.834 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.093 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:24.093 05:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.028 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.287 05:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.546 00:16:25.546 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.546 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.546 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.112 { 00:16:26.112 "cntlid": 109, 00:16:26.112 "qid": 0, 00:16:26.112 "state": "enabled", 00:16:26.112 "thread": "nvmf_tgt_poll_group_000", 00:16:26.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:26.112 "listen_address": { 00:16:26.112 "trtype": "TCP", 00:16:26.112 "adrfam": "IPv4", 00:16:26.112 "traddr": "10.0.0.3", 00:16:26.112 "trsvcid": "4420" 00:16:26.112 }, 00:16:26.112 "peer_address": { 00:16:26.112 "trtype": "TCP", 00:16:26.112 "adrfam": "IPv4", 00:16:26.112 "traddr": "10.0.0.1", 00:16:26.112 "trsvcid": "58346" 00:16:26.112 }, 00:16:26.112 "auth": { 00:16:26.112 "state": "completed", 00:16:26.112 "digest": "sha512", 00:16:26.112 "dhgroup": "ffdhe2048" 00:16:26.112 } 00:16:26.112 } 00:16:26.112 ]' 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.112 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.372 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:26.373 05:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:27.312 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.313 05:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.313 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.572 05:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.830 00:16:27.830 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.830 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.830 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.398 { 00:16:28.398 "cntlid": 111, 00:16:28.398 "qid": 0, 00:16:28.398 "state": "enabled", 00:16:28.398 "thread": "nvmf_tgt_poll_group_000", 00:16:28.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:28.398 "listen_address": { 00:16:28.398 "trtype": "TCP", 00:16:28.398 "adrfam": "IPv4", 00:16:28.398 "traddr": "10.0.0.3", 00:16:28.398 "trsvcid": "4420" 00:16:28.398 }, 00:16:28.398 "peer_address": { 00:16:28.398 "trtype": "TCP", 00:16:28.398 "adrfam": "IPv4", 00:16:28.398 "traddr": "10.0.0.1", 00:16:28.398 "trsvcid": "41878" 00:16:28.398 }, 00:16:28.398 "auth": { 00:16:28.398 "state": "completed", 00:16:28.398 "digest": "sha512", 00:16:28.398 "dhgroup": "ffdhe2048" 00:16:28.398 } 00:16:28.398 } 00:16:28.398 ]' 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.398 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.656 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:28.656 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.592 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.851 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.108 00:16:30.108 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.108 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.108 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.674 { 00:16:30.674 "cntlid": 113, 00:16:30.674 "qid": 0, 00:16:30.674 "state": "enabled", 00:16:30.674 "thread": "nvmf_tgt_poll_group_000", 00:16:30.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:30.674 "listen_address": { 00:16:30.674 "trtype": "TCP", 00:16:30.674 "adrfam": "IPv4", 00:16:30.674 "traddr": "10.0.0.3", 00:16:30.674 "trsvcid": "4420" 00:16:30.674 }, 00:16:30.674 "peer_address": { 00:16:30.674 "trtype": "TCP", 00:16:30.674 "adrfam": "IPv4", 00:16:30.674 "traddr": "10.0.0.1", 00:16:30.674 "trsvcid": "41902" 00:16:30.674 }, 00:16:30.674 "auth": { 00:16:30.674 "state": "completed", 00:16:30.674 "digest": "sha512", 00:16:30.674 "dhgroup": "ffdhe3072" 00:16:30.674 } 00:16:30.674 } 00:16:30.674 ]' 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.674 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.933 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:30.933 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:31.866 
05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.866 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:31.866 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.866 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.867 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.436 00:16:32.436 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.436 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.436 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.694 { 00:16:32.694 "cntlid": 115, 00:16:32.694 "qid": 0, 00:16:32.694 "state": "enabled", 00:16:32.694 "thread": "nvmf_tgt_poll_group_000", 00:16:32.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:32.694 "listen_address": { 00:16:32.694 "trtype": "TCP", 00:16:32.694 "adrfam": "IPv4", 00:16:32.694 "traddr": "10.0.0.3", 00:16:32.694 "trsvcid": "4420" 00:16:32.694 }, 00:16:32.694 "peer_address": { 00:16:32.694 "trtype": "TCP", 00:16:32.694 "adrfam": "IPv4", 00:16:32.694 "traddr": "10.0.0.1", 00:16:32.694 "trsvcid": "41936" 00:16:32.694 }, 00:16:32.694 "auth": { 00:16:32.694 "state": "completed", 00:16:32.694 "digest": "sha512", 00:16:32.694 "dhgroup": "ffdhe3072" 00:16:32.694 } 00:16:32.694 } 00:16:32.694 ]' 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.694 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.951 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.951 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.951 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.209 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:33.209 05:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: 
--dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.141 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.142 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.708 00:16:34.708 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.708 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.708 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.966 { 00:16:34.966 "cntlid": 117, 00:16:34.966 "qid": 0, 00:16:34.966 "state": "enabled", 00:16:34.966 "thread": "nvmf_tgt_poll_group_000", 00:16:34.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:34.966 "listen_address": { 00:16:34.966 "trtype": "TCP", 00:16:34.966 "adrfam": "IPv4", 00:16:34.966 "traddr": "10.0.0.3", 00:16:34.966 "trsvcid": "4420" 00:16:34.966 }, 00:16:34.966 "peer_address": { 00:16:34.966 "trtype": "TCP", 00:16:34.966 "adrfam": "IPv4", 00:16:34.966 "traddr": "10.0.0.1", 00:16:34.966 "trsvcid": "41946" 00:16:34.966 }, 00:16:34.966 "auth": { 00:16:34.966 "state": "completed", 00:16:34.966 "digest": "sha512", 00:16:34.966 "dhgroup": "ffdhe3072" 00:16:34.966 } 00:16:34.966 } 00:16:34.966 ]' 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.966 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.224 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.224 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.224 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.224 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.224 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.482 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:35.482 05:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 
4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.415 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.674 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.674 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.674 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.674 05:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.932 00:16:36.932 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.932 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.932 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.497 { 00:16:37.497 "cntlid": 119, 00:16:37.497 "qid": 0, 00:16:37.497 "state": "enabled", 00:16:37.497 "thread": "nvmf_tgt_poll_group_000", 00:16:37.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:37.497 "listen_address": { 00:16:37.497 "trtype": "TCP", 00:16:37.497 "adrfam": "IPv4", 00:16:37.497 "traddr": "10.0.0.3", 00:16:37.497 "trsvcid": "4420" 00:16:37.497 }, 00:16:37.497 "peer_address": { 00:16:37.497 "trtype": "TCP", 00:16:37.497 "adrfam": "IPv4", 00:16:37.497 "traddr": "10.0.0.1", 00:16:37.497 "trsvcid": "55986" 00:16:37.497 }, 00:16:37.497 "auth": { 00:16:37.497 "state": "completed", 00:16:37.497 "digest": "sha512", 00:16:37.497 "dhgroup": "ffdhe3072" 00:16:37.497 } 00:16:37.497 } 00:16:37.497 ]' 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.497 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.754 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:37.754 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret 
DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.690 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.948 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.205 00:16:39.463 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.463 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.463 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.721 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.721 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.721 { 00:16:39.721 "cntlid": 121, 00:16:39.721 "qid": 0, 00:16:39.721 "state": "enabled", 00:16:39.721 "thread": "nvmf_tgt_poll_group_000", 00:16:39.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:39.721 "listen_address": { 00:16:39.721 "trtype": "TCP", 00:16:39.721 "adrfam": "IPv4", 00:16:39.721 "traddr": "10.0.0.3", 00:16:39.721 "trsvcid": "4420" 00:16:39.721 }, 00:16:39.721 "peer_address": { 00:16:39.721 "trtype": "TCP", 00:16:39.721 "adrfam": "IPv4", 00:16:39.721 "traddr": "10.0.0.1", 00:16:39.721 "trsvcid": "56010" 00:16:39.721 }, 00:16:39.721 "auth": { 00:16:39.721 "state": "completed", 00:16:39.721 "digest": "sha512", 00:16:39.721 "dhgroup": "ffdhe4096" 00:16:39.721 } 00:16:39.721 } 00:16:39.721 ]' 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.721 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.287 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:40.287 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.856 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.119 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.119 05:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.691 00:16:41.691 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.691 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.691 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.950 { 00:16:41.950 "cntlid": 123, 00:16:41.950 "qid": 0, 00:16:41.950 "state": "enabled", 00:16:41.950 "thread": "nvmf_tgt_poll_group_000", 00:16:41.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:41.950 "listen_address": { 00:16:41.950 "trtype": "TCP", 00:16:41.950 "adrfam": "IPv4", 00:16:41.950 "traddr": "10.0.0.3", 00:16:41.950 "trsvcid": "4420" 00:16:41.950 }, 00:16:41.950 "peer_address": { 00:16:41.950 "trtype": "TCP", 00:16:41.950 "adrfam": "IPv4", 00:16:41.950 "traddr": "10.0.0.1", 00:16:41.950 "trsvcid": "56048" 00:16:41.950 }, 00:16:41.950 "auth": { 00:16:41.950 "state": "completed", 00:16:41.950 "digest": "sha512", 00:16:41.950 "dhgroup": "ffdhe4096" 00:16:41.950 } 00:16:41.950 } 00:16:41.950 ]' 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.950 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.951 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.517 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret 
DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:42.517 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.083 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.342 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.909 00:16:43.909 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.909 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.909 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.167 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.167 { 00:16:44.167 "cntlid": 125, 00:16:44.167 "qid": 0, 00:16:44.167 "state": "enabled", 00:16:44.168 "thread": "nvmf_tgt_poll_group_000", 00:16:44.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:44.168 "listen_address": { 00:16:44.168 "trtype": "TCP", 00:16:44.168 "adrfam": "IPv4", 00:16:44.168 "traddr": "10.0.0.3", 00:16:44.168 "trsvcid": "4420" 00:16:44.168 }, 00:16:44.168 "peer_address": { 00:16:44.168 "trtype": "TCP", 00:16:44.168 "adrfam": "IPv4", 00:16:44.168 "traddr": "10.0.0.1", 00:16:44.168 "trsvcid": "56072" 00:16:44.168 }, 00:16:44.168 "auth": { 00:16:44.168 "state": "completed", 00:16:44.168 "digest": "sha512", 00:16:44.168 "dhgroup": "ffdhe4096" 00:16:44.168 } 00:16:44.168 } 00:16:44.168 ]' 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.168 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.735 05:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:44.735 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.301 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.560 05:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.818 00:16:45.818 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.818 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.818 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.386 { 00:16:46.386 "cntlid": 127, 00:16:46.386 "qid": 0, 00:16:46.386 "state": "enabled", 00:16:46.386 "thread": "nvmf_tgt_poll_group_000", 00:16:46.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:46.386 "listen_address": { 00:16:46.386 "trtype": "TCP", 00:16:46.386 "adrfam": "IPv4", 00:16:46.386 "traddr": "10.0.0.3", 00:16:46.386 "trsvcid": "4420" 00:16:46.386 }, 00:16:46.386 "peer_address": { 00:16:46.386 "trtype": "TCP", 00:16:46.386 "adrfam": "IPv4", 00:16:46.386 "traddr": "10.0.0.1", 00:16:46.386 "trsvcid": "56108" 00:16:46.386 }, 00:16:46.386 "auth": { 00:16:46.386 "state": "completed", 00:16:46.386 "digest": "sha512", 00:16:46.386 "dhgroup": "ffdhe4096" 00:16:46.386 } 00:16:46.386 } 00:16:46.386 ]' 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.386 05:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
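One detail worth calling out in the two passes above: the controller key is optional. The helper expands ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), so when no controller key exists for an index (key3 here) the --dhchap-ctrlr-key flag is dropped entirely and that pass exercises unidirectional, host-only authentication, while key0-key2 run bidirectionally. A minimal stand-alone sketch of that expansion pattern follows; the key names and NQNs mirror the log, and the echo stands in for the real RPC call:

    #!/usr/bin/env bash
    # Sketch: emit the per-key nvmf_subsystem_add_host invocations, omitting the
    # controller-key flag when no controller key is defined for that index.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93
    keys=( key0 key1 key2 key3 )
    ckeys=( ckey0 ckey1 ckey2 "" )   # key3 has no controller key -> unidirectional auth

    for i in "${!keys[@]}"; do
      # ${...:+...} expands to nothing when ckeys[i] is empty, so the flag disappears.
      ckey=( ${ckeys[$i]:+--dhchap-ctrlr-key "${ckeys[$i]}"} )
      echo nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "${keys[$i]}" "${ckey[@]}"
    done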
00:16:46.644 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:46.644 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.579 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.837 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
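Besides the SPDK bdev_nvme initiator, every pass also authenticates the kernel host through nvme-cli, as in the connect/disconnect pair just above. A condensed recap of that host-side step follows; the address, NQNs and host ID are taken from the log, HOST_KEY/CTRL_KEY are placeholders for the DHHC-1 secrets the test generates, and it assumes root privileges and a target actually listening on 10.0.0.3:4420:

    #!/usr/bin/env bash
    # Kernel-initiator side of one authentication round, condensed from the log.
    traddr=10.0.0.3
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93
    hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93
    HOST_KEY='DHHC-1:...'   # placeholder: host secret in DHHC-1 format
    CTRL_KEY='DHHC-1:...'   # placeholder: controller secret; omitted for unidirectional keys

    nvme connect -t tcp -a "$traddr" -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
    nvme disconnect -n "$subnqn"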
00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.838 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.096 00:16:48.355 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.355 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.355 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.613 { 00:16:48.613 "cntlid": 129, 00:16:48.613 "qid": 0, 00:16:48.613 "state": "enabled", 00:16:48.613 "thread": "nvmf_tgt_poll_group_000", 00:16:48.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:48.613 "listen_address": { 00:16:48.613 "trtype": "TCP", 00:16:48.613 "adrfam": "IPv4", 00:16:48.613 "traddr": "10.0.0.3", 00:16:48.613 "trsvcid": "4420" 00:16:48.613 }, 00:16:48.613 "peer_address": { 00:16:48.613 "trtype": "TCP", 00:16:48.613 "adrfam": "IPv4", 00:16:48.613 "traddr": "10.0.0.1", 00:16:48.613 "trsvcid": "58926" 00:16:48.613 }, 00:16:48.613 "auth": { 00:16:48.613 "state": "completed", 00:16:48.613 "digest": "sha512", 00:16:48.613 "dhgroup": "ffdhe6144" 00:16:48.613 } 00:16:48.613 } 00:16:48.613 ]' 00:16:48.613 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.613 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.613 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.613 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.613 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.871 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.871 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.871 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.130 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:49.130 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.698 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.011 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.579 00:16:50.579 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.579 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.579 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.146 { 00:16:51.146 "cntlid": 131, 00:16:51.146 "qid": 0, 00:16:51.146 "state": "enabled", 00:16:51.146 "thread": "nvmf_tgt_poll_group_000", 00:16:51.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:51.146 "listen_address": { 00:16:51.146 "trtype": "TCP", 00:16:51.146 "adrfam": "IPv4", 00:16:51.146 "traddr": "10.0.0.3", 00:16:51.146 "trsvcid": "4420" 00:16:51.146 }, 00:16:51.146 "peer_address": { 00:16:51.146 "trtype": "TCP", 00:16:51.146 "adrfam": "IPv4", 00:16:51.146 "traddr": "10.0.0.1", 00:16:51.146 "trsvcid": "58956" 00:16:51.146 }, 00:16:51.146 "auth": { 00:16:51.146 "state": "completed", 00:16:51.146 "digest": "sha512", 00:16:51.146 "dhgroup": "ffdhe6144" 00:16:51.146 } 00:16:51.146 } 00:16:51.146 ]' 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
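The pattern repeated throughout this stretch of the log is the same for every digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP options to the combination under test, allow the host on the subsystem with the matching keys, attach a controller, confirm on the target that the qpair negotiated the expected digest and dhgroup, then tear everything down. A rough sketch of one ffdhe6144 pass follows, using the socket path and addresses shown above; key1/ckey1 are assumed to be key names already registered with both applications, the target-side RPCs are assumed to use the default RPC socket, and error handling is omitted:

    #!/usr/bin/env bash
    # One connect/verify/teardown pass, condensed from the sequence in the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock        # host-side SPDK application
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93

    # Host: only offer the digest/dhgroup pair under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target: permit this host with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host: attach an authenticated controller.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target: the qpair should report completed DH-HMAC-CHAP with the expected parameters.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
    # expected output: completed sha512 ffdhe6144

    # Teardown.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"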
00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.146 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.404 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:51.404 05:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.340 05:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.906 00:16:52.906 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.906 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.906 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.472 { 00:16:53.472 "cntlid": 133, 00:16:53.472 "qid": 0, 00:16:53.472 "state": "enabled", 00:16:53.472 "thread": "nvmf_tgt_poll_group_000", 00:16:53.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:53.472 "listen_address": { 00:16:53.472 "trtype": "TCP", 00:16:53.472 "adrfam": "IPv4", 00:16:53.472 "traddr": "10.0.0.3", 00:16:53.472 "trsvcid": "4420" 00:16:53.472 }, 00:16:53.472 "peer_address": { 00:16:53.472 "trtype": "TCP", 00:16:53.472 "adrfam": "IPv4", 00:16:53.472 "traddr": "10.0.0.1", 00:16:53.472 "trsvcid": "58982" 00:16:53.472 }, 00:16:53.472 "auth": { 00:16:53.472 "state": "completed", 00:16:53.472 "digest": "sha512", 00:16:53.472 "dhgroup": "ffdhe6144" 00:16:53.472 } 00:16:53.472 } 00:16:53.472 ]' 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.472 05:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.472 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.730 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:53.730 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.665 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.923 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:54.923 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.923 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.923 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.924 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.491 00:16:55.491 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.491 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.491 05:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.749 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.749 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.750 { 00:16:55.750 "cntlid": 135, 00:16:55.750 "qid": 0, 00:16:55.750 "state": "enabled", 00:16:55.750 "thread": "nvmf_tgt_poll_group_000", 00:16:55.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:55.750 "listen_address": { 00:16:55.750 "trtype": "TCP", 00:16:55.750 "adrfam": "IPv4", 00:16:55.750 "traddr": "10.0.0.3", 00:16:55.750 "trsvcid": "4420" 00:16:55.750 }, 00:16:55.750 "peer_address": { 00:16:55.750 "trtype": "TCP", 00:16:55.750 "adrfam": "IPv4", 00:16:55.750 "traddr": "10.0.0.1", 00:16:55.750 "trsvcid": "58994" 00:16:55.750 }, 00:16:55.750 "auth": { 00:16:55.750 "state": "completed", 00:16:55.750 "digest": "sha512", 00:16:55.750 "dhgroup": "ffdhe6144" 00:16:55.750 } 00:16:55.750 } 00:16:55.750 ]' 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.750 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.008 
05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.008 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.008 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.266 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:56.266 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.834 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.092 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.093 05:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.027 00:16:58.027 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.027 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.027 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.286 { 00:16:58.286 "cntlid": 137, 00:16:58.286 "qid": 0, 00:16:58.286 "state": "enabled", 00:16:58.286 "thread": "nvmf_tgt_poll_group_000", 00:16:58.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:16:58.286 "listen_address": { 00:16:58.286 "trtype": "TCP", 00:16:58.286 "adrfam": "IPv4", 00:16:58.286 "traddr": "10.0.0.3", 00:16:58.286 "trsvcid": "4420" 00:16:58.286 }, 00:16:58.286 "peer_address": { 00:16:58.286 "trtype": "TCP", 00:16:58.286 "adrfam": "IPv4", 00:16:58.286 "traddr": "10.0.0.1", 00:16:58.286 "trsvcid": "52844" 00:16:58.286 }, 00:16:58.286 "auth": { 00:16:58.286 "state": "completed", 00:16:58.286 "digest": "sha512", 00:16:58.286 "dhgroup": "ffdhe8192" 00:16:58.286 } 00:16:58.286 } 00:16:58.286 ]' 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.286 05:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.286 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.543 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:58.543 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:16:59.476 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.476 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.477 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.736 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.673 00:17:00.673 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.673 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.673 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.932 { 00:17:00.932 "cntlid": 139, 00:17:00.932 "qid": 0, 00:17:00.932 "state": "enabled", 00:17:00.932 "thread": "nvmf_tgt_poll_group_000", 00:17:00.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:00.932 "listen_address": { 00:17:00.932 "trtype": "TCP", 00:17:00.932 "adrfam": "IPv4", 00:17:00.932 "traddr": "10.0.0.3", 00:17:00.932 "trsvcid": "4420" 00:17:00.932 }, 00:17:00.932 "peer_address": { 00:17:00.932 "trtype": "TCP", 00:17:00.932 "adrfam": "IPv4", 00:17:00.932 "traddr": "10.0.0.1", 00:17:00.932 "trsvcid": "52862" 00:17:00.932 }, 00:17:00.932 "auth": { 00:17:00.932 "state": "completed", 00:17:00.932 "digest": "sha512", 00:17:00.932 "dhgroup": "ffdhe8192" 00:17:00.932 } 00:17:00.932 } 00:17:00.932 ]' 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.932 05:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.932 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.191 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:17:01.191 05:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: --dhchap-ctrl-secret DHHC-1:02:NDFjODkzMTMxMTMyYjQ3MzNiNDljNjM5YWQ0NzQ3ODllMjdjYzNlYmVkOTNjNzJlHdr7gw==: 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.127 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.385 05:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.958 00:17:02.958 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.958 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.958 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.550 { 00:17:03.550 "cntlid": 141, 00:17:03.550 "qid": 0, 00:17:03.550 "state": "enabled", 00:17:03.550 "thread": "nvmf_tgt_poll_group_000", 00:17:03.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:03.550 "listen_address": { 00:17:03.550 "trtype": "TCP", 00:17:03.550 "adrfam": "IPv4", 00:17:03.550 "traddr": "10.0.0.3", 00:17:03.550 "trsvcid": "4420" 00:17:03.550 }, 00:17:03.550 "peer_address": { 00:17:03.550 "trtype": "TCP", 00:17:03.550 "adrfam": "IPv4", 00:17:03.550 "traddr": "10.0.0.1", 00:17:03.550 "trsvcid": "52892" 00:17:03.550 }, 00:17:03.550 "auth": { 00:17:03.550 "state": "completed", 00:17:03.550 "digest": "sha512", 00:17:03.550 "dhgroup": "ffdhe8192" 00:17:03.550 } 00:17:03.550 } 00:17:03.550 ]' 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.550 05:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.808 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:17:03.808 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:01:NTE1N2EzNDc3NGYxNTk0ZTE5MGZjZTdiMTUzNzUxNDRPfDG8: 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.744 05:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.003 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.569 00:17:05.569 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.569 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.569 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.828 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.086 { 00:17:06.086 "cntlid": 143, 00:17:06.086 "qid": 0, 00:17:06.086 "state": "enabled", 00:17:06.086 "thread": "nvmf_tgt_poll_group_000", 00:17:06.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:06.086 "listen_address": { 00:17:06.086 "trtype": "TCP", 00:17:06.086 "adrfam": "IPv4", 00:17:06.086 "traddr": "10.0.0.3", 00:17:06.086 "trsvcid": "4420" 00:17:06.086 }, 00:17:06.086 "peer_address": { 00:17:06.086 "trtype": "TCP", 00:17:06.086 "adrfam": "IPv4", 00:17:06.086 "traddr": "10.0.0.1", 00:17:06.086 "trsvcid": "52916" 00:17:06.086 }, 00:17:06.086 "auth": { 00:17:06.086 "state": "completed", 00:17:06.086 "digest": "sha512", 00:17:06.086 "dhgroup": "ffdhe8192" 00:17:06.086 } 00:17:06.086 } 00:17:06.086 ]' 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.086 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.087 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.345 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:06.345 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.280 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:07.538 05:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.538 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.470 00:17:08.471 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.471 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.471 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.729 { 00:17:08.729 "cntlid": 145, 00:17:08.729 "qid": 0, 00:17:08.729 "state": "enabled", 00:17:08.729 "thread": "nvmf_tgt_poll_group_000", 00:17:08.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:08.729 "listen_address": { 00:17:08.729 "trtype": "TCP", 00:17:08.729 "adrfam": "IPv4", 00:17:08.729 "traddr": "10.0.0.3", 
00:17:08.729 "trsvcid": "4420" 00:17:08.729 }, 00:17:08.729 "peer_address": { 00:17:08.729 "trtype": "TCP", 00:17:08.729 "adrfam": "IPv4", 00:17:08.729 "traddr": "10.0.0.1", 00:17:08.729 "trsvcid": "40544" 00:17:08.729 }, 00:17:08.729 "auth": { 00:17:08.729 "state": "completed", 00:17:08.729 "digest": "sha512", 00:17:08.729 "dhgroup": "ffdhe8192" 00:17:08.729 } 00:17:08.729 } 00:17:08.729 ]' 00:17:08.729 05:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.729 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.296 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:17:09.296 05:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:00:ZWY0YmEyMjU4ODA0NGEyOWEyYjQ5NGNjZDQ0ODgyNWNiZGUyOGQ2ZjhhNWE5N2Yylx8B5A==: --dhchap-ctrl-secret DHHC-1:03:MzE1MWJlYTNhOWQ3OGNmODQxZWQxZTA5OTA4ZWJlODRhY2VkYTQyOWM5NTM0YjFjZGRkMTcwZDgwNjA2NjAwME2eisk=: 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.868 
05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:09.868 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:10.435 request: 00:17:10.435 { 00:17:10.435 "name": "nvme0", 00:17:10.435 "trtype": "tcp", 00:17:10.435 "traddr": "10.0.0.3", 00:17:10.435 "adrfam": "ipv4", 00:17:10.435 "trsvcid": "4420", 00:17:10.435 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:10.435 "prchk_reftag": false, 00:17:10.435 "prchk_guard": false, 00:17:10.435 "hdgst": false, 00:17:10.435 "ddgst": false, 00:17:10.435 "dhchap_key": "key2", 00:17:10.435 "allow_unrecognized_csi": false, 00:17:10.435 "method": "bdev_nvme_attach_controller", 00:17:10.435 "req_id": 1 00:17:10.435 } 00:17:10.435 Got JSON-RPC error response 00:17:10.435 response: 00:17:10.435 { 00:17:10.435 "code": -5, 00:17:10.435 "message": "Input/output error" 00:17:10.435 } 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:10.435 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.370 request: 00:17:11.370 { 00:17:11.370 "name": "nvme0", 00:17:11.370 "trtype": "tcp", 00:17:11.370 "traddr": "10.0.0.3", 00:17:11.370 "adrfam": "ipv4", 00:17:11.370 "trsvcid": "4420", 00:17:11.370 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:11.370 "prchk_reftag": false, 00:17:11.370 "prchk_guard": false, 00:17:11.370 "hdgst": false, 00:17:11.370 "ddgst": false, 00:17:11.370 "dhchap_key": "key1", 00:17:11.370 "dhchap_ctrlr_key": "ckey2", 00:17:11.370 "allow_unrecognized_csi": false, 00:17:11.370 "method": "bdev_nvme_attach_controller", 00:17:11.370 "req_id": 1 00:17:11.370 } 00:17:11.370 Got JSON-RPC error response 00:17:11.370 response: 00:17:11.370 { 00:17:11.370 "code": -5, 00:17:11.370 "message": "Input/output error" 00:17:11.370 } 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:11.370 05:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.370 05:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.937 request: 00:17:11.937 { 00:17:11.937 "name": "nvme0", 00:17:11.937 "trtype": "tcp", 00:17:11.937 "traddr": "10.0.0.3", 00:17:11.937 "adrfam": "ipv4", 00:17:11.937 "trsvcid": "4420", 00:17:11.937 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:17:11.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:11.937 "prchk_reftag": false, 00:17:11.937 "prchk_guard": false, 00:17:11.937 "hdgst": false, 00:17:11.937 "ddgst": false, 00:17:11.937 "dhchap_key": "key1", 00:17:11.937 "dhchap_ctrlr_key": "ckey1", 00:17:11.937 "allow_unrecognized_csi": false, 00:17:11.937 "method": "bdev_nvme_attach_controller", 00:17:11.937 "req_id": 1 00:17:11.937 } 00:17:11.937 Got JSON-RPC error response 00:17:11.937 response: 00:17:11.937 { 00:17:11.937 "code": -5, 00:17:11.937 "message": "Input/output error" 00:17:11.937 } 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67525 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67525 ']' 00:17:11.937 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67525 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67525 00:17:11.938 killing process with pid 67525 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67525' 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67525 00:17:11.938 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67525 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70757 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70757 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70757 ']' 00:17:12.196 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.197 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.197 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.197 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.197 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.455 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:12.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70757 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70757 ']' 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.456 05:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.714 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:12.714 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:12.714 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.714 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 null0 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R25 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ch5 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ch5 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Jb7 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RjT ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RjT 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:12.974 05:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kNo 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.I39 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I39 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8fd 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:12.974 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.911 nvme0n1 00:17:13.911 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.911 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.911 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.477 { 00:17:14.477 "cntlid": 1, 00:17:14.477 "qid": 0, 00:17:14.477 "state": "enabled", 00:17:14.477 "thread": "nvmf_tgt_poll_group_000", 00:17:14.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:14.477 "listen_address": { 00:17:14.477 "trtype": "TCP", 00:17:14.477 "adrfam": "IPv4", 00:17:14.477 "traddr": "10.0.0.3", 00:17:14.477 "trsvcid": "4420" 00:17:14.477 }, 00:17:14.477 "peer_address": { 00:17:14.477 "trtype": "TCP", 00:17:14.477 "adrfam": "IPv4", 00:17:14.477 "traddr": "10.0.0.1", 00:17:14.477 "trsvcid": "40604" 00:17:14.477 }, 00:17:14.477 "auth": { 00:17:14.477 "state": "completed", 00:17:14.477 "digest": "sha512", 00:17:14.477 "dhgroup": "ffdhe8192" 00:17:14.477 } 00:17:14.477 } 00:17:14.477 ]' 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.477 05:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.735 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:14.735 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key3 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:15.692 05:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:15.950 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.951 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.209 request: 00:17:16.209 { 00:17:16.209 "name": "nvme0", 00:17:16.209 "trtype": "tcp", 00:17:16.209 "traddr": "10.0.0.3", 00:17:16.209 "adrfam": "ipv4", 00:17:16.209 "trsvcid": "4420", 00:17:16.209 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:16.209 "prchk_reftag": false, 00:17:16.209 "prchk_guard": false, 00:17:16.209 "hdgst": false, 00:17:16.209 "ddgst": false, 00:17:16.209 "dhchap_key": "key3", 00:17:16.209 "allow_unrecognized_csi": false, 00:17:16.209 "method": "bdev_nvme_attach_controller", 00:17:16.209 "req_id": 1 00:17:16.209 } 00:17:16.209 Got JSON-RPC error response 00:17:16.209 response: 00:17:16.209 { 00:17:16.209 "code": -5, 00:17:16.209 "message": "Input/output error" 00:17:16.209 } 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:16.209 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.468 05:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.726 request: 00:17:16.726 { 00:17:16.726 "name": "nvme0", 00:17:16.726 "trtype": "tcp", 00:17:16.726 "traddr": "10.0.0.3", 00:17:16.726 "adrfam": "ipv4", 00:17:16.726 "trsvcid": "4420", 00:17:16.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:16.726 "prchk_reftag": false, 00:17:16.726 "prchk_guard": false, 00:17:16.726 "hdgst": false, 00:17:16.726 "ddgst": false, 00:17:16.726 "dhchap_key": "key3", 00:17:16.726 "allow_unrecognized_csi": false, 00:17:16.726 "method": "bdev_nvme_attach_controller", 00:17:16.726 "req_id": 1 00:17:16.726 } 00:17:16.726 Got JSON-RPC error response 00:17:16.726 response: 00:17:16.726 { 00:17:16.726 "code": -5, 00:17:16.726 "message": "Input/output error" 00:17:16.726 } 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.726 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.985 05:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:17.554 request: 00:17:17.554 { 00:17:17.554 "name": "nvme0", 00:17:17.554 "trtype": "tcp", 00:17:17.554 "traddr": "10.0.0.3", 00:17:17.554 "adrfam": "ipv4", 00:17:17.554 "trsvcid": "4420", 00:17:17.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:17.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:17.554 "prchk_reftag": false, 00:17:17.554 "prchk_guard": false, 00:17:17.554 "hdgst": false, 00:17:17.554 "ddgst": false, 00:17:17.554 "dhchap_key": "key0", 00:17:17.554 "dhchap_ctrlr_key": "key1", 00:17:17.554 "allow_unrecognized_csi": false, 00:17:17.554 "method": "bdev_nvme_attach_controller", 00:17:17.554 "req_id": 1 00:17:17.554 } 00:17:17.554 Got JSON-RPC error response 00:17:17.554 response: 00:17:17.554 { 00:17:17.554 "code": -5, 00:17:17.554 "message": "Input/output error" 00:17:17.554 } 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:17.554 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:18.120 nvme0n1 00:17:18.120 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:18.120 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:18.120 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.379 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.379 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.379 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:18.638 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:19.574 nvme0n1 00:17:19.574 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:19.574 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.574 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.142 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:20.401 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.401 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:20.401 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid 4bd82fc4-6e19-4d22-95c5-23a13095cd93 -l 0 --dhchap-secret DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: --dhchap-ctrl-secret DHHC-1:03:NDA4NTcwYTkzMjI0ZDkwZjlmYzAwZjQ1YTBhODFmZDQ4MGM2MTBkM2EwYmY5OWI0OGI4ZDUyZWM3MzJjNGI4NsvemQw=: 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.000 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:21.568 05:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:22.134 request: 00:17:22.134 { 00:17:22.134 "name": "nvme0", 00:17:22.134 "trtype": "tcp", 00:17:22.134 "traddr": "10.0.0.3", 00:17:22.134 "adrfam": "ipv4", 00:17:22.134 "trsvcid": "4420", 00:17:22.134 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93", 00:17:22.134 "prchk_reftag": false, 00:17:22.134 "prchk_guard": false, 00:17:22.134 "hdgst": false, 00:17:22.134 "ddgst": false, 00:17:22.135 "dhchap_key": "key1", 00:17:22.135 "allow_unrecognized_csi": false, 00:17:22.135 "method": "bdev_nvme_attach_controller", 00:17:22.135 "req_id": 1 00:17:22.135 } 00:17:22.135 Got JSON-RPC error response 00:17:22.135 response: 00:17:22.135 { 00:17:22.135 "code": -5, 00:17:22.135 "message": "Input/output error" 00:17:22.135 } 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.135 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.069 nvme0n1 00:17:23.069 
05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:23.069 05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.069 05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:23.328 05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.328 05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.328 05:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:23.896 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:24.154 nvme0n1 00:17:24.154 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:24.154 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:24.154 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.412 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.412 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.412 05:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.675 05:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: '' 2s 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: ]] 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTViMWM2NjdjMWQ5MjNlNGM2MDExNWU0OTlmNTZjZWMfzAEj: 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:24.675 05:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: 2s 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:27.209 05:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: ]] 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmRjNjZlZjllNWQ0MjIxNmZlYjEzMmY2Mzg2MDJlZmI3MDNiNDY5MGNjNDUwZjBif6eg1Q==: 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:27.209 05:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.107 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:30.042 nvme0n1 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.042 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.608 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:30.608 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.608 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:30.866 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:31.124 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:31.124 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:31.124 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.382 05:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.382 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.948 request: 00:17:31.948 { 00:17:31.948 "name": "nvme0", 00:17:31.948 "dhchap_key": "key1", 00:17:31.948 "dhchap_ctrlr_key": "key3", 00:17:31.948 "method": "bdev_nvme_set_keys", 00:17:31.948 "req_id": 1 00:17:31.948 } 00:17:31.948 Got JSON-RPC error response 00:17:31.948 response: 00:17:31.948 { 00:17:31.948 "code": -13, 00:17:31.948 "message": "Permission denied" 00:17:31.948 } 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.206 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:32.465 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:32.465 05:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:33.398 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:33.398 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:33.398 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.659 05:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.031 nvme0n1 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.031 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.600 request: 00:17:35.600 { 00:17:35.600 "name": "nvme0", 00:17:35.600 "dhchap_key": "key2", 00:17:35.600 "dhchap_ctrlr_key": "key0", 00:17:35.600 "method": "bdev_nvme_set_keys", 00:17:35.600 "req_id": 1 00:17:35.600 } 00:17:35.600 Got JSON-RPC error response 00:17:35.600 response: 00:17:35.600 { 00:17:35.600 "code": -13, 00:17:35.600 "message": "Permission denied" 00:17:35.600 } 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:35.600 05:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.861 05:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:35.861 05:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:36.816 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:36.817 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:36.817 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67549 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67549 ']' 00:17:37.086 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67549 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67549 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:37.357 killing process with pid 67549 00:17:37.357 05:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67549' 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67549 00:17:37.357 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67549 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.619 05:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.619 rmmod nvme_tcp 00:17:37.619 rmmod nvme_fabrics 00:17:37.619 rmmod nvme_keyring 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70757 ']' 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70757 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70757 ']' 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70757 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:37.619 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70757 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:37.620 killing process with pid 70757 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70757' 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70757 00:17:37.620 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70757 00:17:37.877 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.877 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.877 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.878 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.R25 /tmp/spdk.key-sha256.Jb7 /tmp/spdk.key-sha384.kNo /tmp/spdk.key-sha512.8fd /tmp/spdk.key-sha512.ch5 /tmp/spdk.key-sha384.RjT /tmp/spdk.key-sha256.I39 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:38.136 00:17:38.136 real 3m27.684s 00:17:38.136 user 8m19.967s 00:17:38.136 sys 0m30.474s 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.136 ************************************ 00:17:38.136 END TEST nvmf_auth_target 
00:17:38.136 ************************************ 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.136 ************************************ 00:17:38.136 START TEST nvmf_bdevio_no_huge 00:17:38.136 ************************************ 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.136 * Looking for test storage... 00:17:38.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:38.136 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:38.395 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.396 --rc genhtml_branch_coverage=1 00:17:38.396 --rc genhtml_function_coverage=1 00:17:38.396 --rc genhtml_legend=1 00:17:38.396 --rc geninfo_all_blocks=1 00:17:38.396 --rc geninfo_unexecuted_blocks=1 00:17:38.396 00:17:38.396 ' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.396 --rc genhtml_branch_coverage=1 00:17:38.396 --rc genhtml_function_coverage=1 00:17:38.396 --rc genhtml_legend=1 00:17:38.396 --rc geninfo_all_blocks=1 00:17:38.396 --rc geninfo_unexecuted_blocks=1 00:17:38.396 00:17:38.396 ' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.396 --rc genhtml_branch_coverage=1 00:17:38.396 --rc genhtml_function_coverage=1 00:17:38.396 --rc genhtml_legend=1 00:17:38.396 --rc geninfo_all_blocks=1 00:17:38.396 --rc geninfo_unexecuted_blocks=1 00:17:38.396 00:17:38.396 ' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.396 --rc genhtml_branch_coverage=1 00:17:38.396 --rc genhtml_function_coverage=1 00:17:38.396 --rc genhtml_legend=1 00:17:38.396 --rc geninfo_all_blocks=1 00:17:38.396 --rc geninfo_unexecuted_blocks=1 00:17:38.396 00:17:38.396 ' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.396 
05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.396 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:38.396 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.397 
05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:38.397 Cannot find device "nvmf_init_br" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.397 Cannot find device "nvmf_init_br2" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:38.397 Cannot find device "nvmf_tgt_br" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.397 Cannot find device "nvmf_tgt_br2" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:38.397 Cannot find device "nvmf_init_br" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.397 Cannot find device "nvmf_init_br2" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.397 Cannot find device "nvmf_tgt_br" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.397 Cannot find device "nvmf_tgt_br2" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.397 Cannot find device "nvmf_br" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.397 Cannot find device "nvmf_init_if" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.397 Cannot find device "nvmf_init_if2" 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:38.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.397 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.656 05:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.656 05:27:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:38.656 00:17:38.656 --- 10.0.0.3 ping statistics --- 00:17:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.656 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:38.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:38.656 00:17:38.656 --- 10.0.0.4 ping statistics --- 00:17:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.656 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:38.656 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:38.657 00:17:38.657 --- 10.0.0.1 ping statistics --- 00:17:38.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.657 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
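
The nvmf_veth_init sequence traced above is the network fixture every nvmf/tcp case in this run depends on: the earlier "Cannot find device" lines are just the pre-cleanup pass finding nothing to remove on a fresh node, after which two veth pairs for the initiator side stay in the root namespace, two for the target side are moved into nvmf_tgt_ns_spdk, the bridge-side peers are enslaved to a single bridge, and iptables rules admit NVMe/TCP traffic on port 4420 before the cross-namespace pings verify reachability. An approximate recreation of that topology, using the interface names and addresses exactly as logged (ordering is approximate and the pre-cleanup pass is omitted):

# Approximate recreation of nvmf_veth_init as logged above (run as root).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if is the addressed endpoint, *_br is the bridge-side peer.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target endpoints move into the namespace; initiators get 10.0.0.1/.2, targets 10.0.0.3/.4.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and join the bridge-side peers under one bridge.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP (port 4420) on the initiator interfaces and allow bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability check in both directions, matching the pings logged here.
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
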
00:17:38.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:38.657 00:17:38.657 --- 10.0.0.2 ping statistics --- 00:17:38.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.657 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71406 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71406 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 71406 ']' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.657 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.657 [2024-11-20 05:27:53.154566] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:17:38.657 [2024-11-20 05:27:53.154680] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:38.915 [2024-11-20 05:27:53.318261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.915 [2024-11-20 05:27:53.378377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.915 [2024-11-20 05:27:53.378434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.915 [2024-11-20 05:27:53.378445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.915 [2024-11-20 05:27:53.378454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.915 [2024-11-20 05:27:53.378461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.915 [2024-11-20 05:27:53.379196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:38.915 [2024-11-20 05:27:53.379290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:38.915 [2024-11-20 05:27:53.379347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:38.915 [2024-11-20 05:27:53.379350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.915 [2024-11-20 05:27:53.384967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 [2024-11-20 05:27:54.237817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 Malloc0 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.844 05:27:54 
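
nvmfappstart above is what makes this a no-huge scenario: the target binary is launched inside the test namespace with hugepages disabled and 1024 MiB of ordinary memory on core mask 0x78, and the harness blocks on the RPC socket before configuring anything. A minimal sketch of that launch, using the binary path, flags and socket path recorded in the trace (the polling loop is a simplified stand-in for waitforlisten, not the harness's implementation):

# Launch nvmf_tgt in the target namespace without hugepages, as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll the JSON-RPC socket until it answers.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
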
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 [2024-11-20 05:27:54.277195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:39.844 { 00:17:39.844 "params": { 00:17:39.844 "name": "Nvme$subsystem", 00:17:39.844 "trtype": "$TEST_TRANSPORT", 00:17:39.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:39.844 "adrfam": "ipv4", 00:17:39.844 "trsvcid": "$NVMF_PORT", 00:17:39.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:39.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:39.844 "hdgst": ${hdgst:-false}, 00:17:39.844 "ddgst": ${ddgst:-false} 00:17:39.844 }, 00:17:39.844 "method": "bdev_nvme_attach_controller" 00:17:39.844 } 00:17:39.844 EOF 00:17:39.844 )") 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:39.844 05:27:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:39.845 "params": { 00:17:39.845 "name": "Nvme1", 00:17:39.845 "trtype": "tcp", 00:17:39.845 "traddr": "10.0.0.3", 00:17:39.845 "adrfam": "ipv4", 00:17:39.845 "trsvcid": "4420", 00:17:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.845 "hdgst": false, 00:17:39.845 "ddgst": false 00:17:39.845 }, 00:17:39.845 "method": "bdev_nvme_attach_controller" 00:17:39.845 }' 00:17:39.845 [2024-11-20 05:27:54.344883] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:17:39.845 [2024-11-20 05:27:54.345028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71442 ] 00:17:40.101 [2024-11-20 05:27:54.547237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.358 [2024-11-20 05:27:54.644120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.358 [2024-11-20 05:27:54.644187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.358 [2024-11-20 05:27:54.644198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.358 [2024-11-20 05:27:54.673152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.616 I/O targets: 00:17:40.616 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:40.616 00:17:40.616 00:17:40.616 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.616 http://cunit.sourceforge.net/ 00:17:40.616 00:17:40.616 00:17:40.616 Suite: bdevio tests on: Nvme1n1 00:17:40.616 Test: blockdev write read block ...passed 00:17:40.616 Test: blockdev write zeroes read block ...passed 00:17:40.616 Test: blockdev write zeroes read no split ...passed 00:17:40.616 Test: blockdev write zeroes read split ...passed 00:17:40.616 Test: blockdev write zeroes read split partial ...passed 00:17:40.616 Test: blockdev reset ...[2024-11-20 05:27:54.940547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:40.616 [2024-11-20 05:27:54.940719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406310 (9): Bad file descriptor 00:17:40.616 [2024-11-20 05:27:54.959632] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
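
With the target up, target/bdevio.sh provisions it entirely over JSON-RPC and then runs the bdevio unit-test binary (also with --no-huge -s 1024) against a generated NVMe attach config fed in on /dev/fd/62. The sketch below condenses the rpc_cmd calls (bdevio.sh@18 through @22) and the bdev_nvme_attach_controller parameters printed above; the surrounding "subsystems"/"config" envelope is the usual SPDK JSON-config shape and is assumed here rather than copied from the trace, and the config goes through a temp file instead of a process-substitution fd for readability:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target-side provisioning, mirroring the rpc_cmd calls logged above.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator-side config for bdevio; params are the ones printed by gen_nvmf_target_json,
# the "subsystems"/"config" wrapper is the standard SPDK JSON-config layout (assumed).
cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# bdevio itself also runs without hugepages, exercising the same allocation path as the target.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024
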
00:17:40.616 passed 00:17:40.616 Test: blockdev write read 8 blocks ...passed 00:17:40.616 Test: blockdev write read size > 128k ...passed 00:17:40.616 Test: blockdev write read invalid size ...passed 00:17:40.616 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.616 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.616 Test: blockdev write read max offset ...passed 00:17:40.616 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.616 Test: blockdev writev readv 8 blocks ...passed 00:17:40.616 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.616 Test: blockdev writev readv block ...passed 00:17:40.616 Test: blockdev writev readv size > 128k ...passed 00:17:40.616 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.616 Test: blockdev comparev and writev ...[2024-11-20 05:27:54.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.972860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.972924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.972945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.973519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.973558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.973608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.973625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.974088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.974124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.974172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.974189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.974644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.974679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.974730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.616 [2024-11-20 05:27:54.974747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:40.616 passed 00:17:40.616 Test: blockdev nvme passthru rw ...passed 00:17:40.616 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:27:54.976142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.616 [2024-11-20 05:27:54.976181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.976401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.616 [2024-11-20 05:27:54.976436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.976643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.616 [2024-11-20 05:27:54.976678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:40.616 [2024-11-20 05:27:54.976878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.616 [2024-11-20 05:27:54.976921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:40.616 passed 00:17:40.616 Test: blockdev nvme admin passthru ...passed 00:17:40.616 Test: blockdev copy ...passed 00:17:40.616 00:17:40.616 Run Summary: Type Total Ran Passed Failed Inactive 00:17:40.616 suites 1 1 n/a 0 0 00:17:40.616 tests 23 23 23 0 0 00:17:40.616 asserts 152 152 152 0 n/a 00:17:40.616 00:17:40.616 Elapsed time = 0.187 seconds 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.190 rmmod nvme_tcp 00:17:41.190 rmmod nvme_fabrics 00:17:41.190 rmmod nvme_keyring 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71406 ']' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71406 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 71406 ']' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 71406 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71406 00:17:41.190 killing process with pid 71406 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71406' 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 71406 00:17:41.190 05:27:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 71406 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.755 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.756 05:27:56 
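
Cleanup runs in roughly the reverse order of setup: bdevio.sh deletes the subsystem, nvmftestfini unloads the kernel initiator modules, killprocess stops the target (pid 71406 above), iptr filters the SPDK_NVMF-tagged rules back out of iptables, and nvmf_veth_fini plus remove_spdk_ns dismantle the bridge, veth pairs and namespace. A condensed sketch of that order, with nvmfpid taken from the launch step earlier and the final line assumed to be what remove_spdk_ns does:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the kernel NVMe/TCP initiator stack loaded for the test, then stop the target.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2> /dev/null

# Drop only the rules tagged SPDK_NVMF, leaving the rest of the ruleset intact (iptr above).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# nvmf_veth_fini: detach from the bridge, then delete bridge, veth pairs and namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk    # stand-in for remove_spdk_ns (assumed)
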
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.756 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.013 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:42.013 00:17:42.013 real 0m3.793s 00:17:42.013 user 0m12.327s 00:17:42.013 sys 0m1.505s 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.014 ************************************ 00:17:42.014 END TEST nvmf_bdevio_no_huge 00:17:42.014 ************************************ 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.014 ************************************ 00:17:42.014 START TEST nvmf_tls 00:17:42.014 ************************************ 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.014 * Looking for test storage... 
00:17:42.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:42.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.014 --rc genhtml_branch_coverage=1 00:17:42.014 --rc genhtml_function_coverage=1 00:17:42.014 --rc genhtml_legend=1 00:17:42.014 --rc geninfo_all_blocks=1 00:17:42.014 --rc geninfo_unexecuted_blocks=1 00:17:42.014 00:17:42.014 ' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:42.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.014 --rc genhtml_branch_coverage=1 00:17:42.014 --rc genhtml_function_coverage=1 00:17:42.014 --rc genhtml_legend=1 00:17:42.014 --rc geninfo_all_blocks=1 00:17:42.014 --rc geninfo_unexecuted_blocks=1 00:17:42.014 00:17:42.014 ' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:42.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.014 --rc genhtml_branch_coverage=1 00:17:42.014 --rc genhtml_function_coverage=1 00:17:42.014 --rc genhtml_legend=1 00:17:42.014 --rc geninfo_all_blocks=1 00:17:42.014 --rc geninfo_unexecuted_blocks=1 00:17:42.014 00:17:42.014 ' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:42.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.014 --rc genhtml_branch_coverage=1 00:17:42.014 --rc genhtml_function_coverage=1 00:17:42.014 --rc genhtml_legend=1 00:17:42.014 --rc geninfo_all_blocks=1 00:17:42.014 --rc geninfo_unexecuted_blocks=1 00:17:42.014 00:17:42.014 ' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.014 05:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.014 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.015 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.273 
05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:42.273 Cannot find device "nvmf_init_br" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.273 Cannot find device "nvmf_init_br2" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:42.273 Cannot find device "nvmf_tgt_br" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.273 Cannot find device "nvmf_tgt_br2" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.273 Cannot find device "nvmf_init_br" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.273 Cannot find device "nvmf_init_br2" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.273 Cannot find device "nvmf_tgt_br" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.273 Cannot find device "nvmf_tgt_br2" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.273 Cannot find device "nvmf_br" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.273 Cannot find device "nvmf_init_if" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:42.273 Cannot find device "nvmf_init_if2" 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.273 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.274 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.274 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:42.274 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:42.274 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:42.532 05:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:42.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:42.532 00:17:42.532 --- 10.0.0.3 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.532 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:42.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:42.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:42.532 00:17:42.532 --- 10.0.0.4 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.532 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:42.532 00:17:42.532 --- 10.0.0.1 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.532 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:42.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:42.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:17:42.532 00:17:42.532 --- 10.0.0.2 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.532 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.532 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71680 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71680 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71680 ']' 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:42.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:42.533 05:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.533 [2024-11-20 05:27:57.012024] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
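Condensed, the network plumbing nvmf/common.sh has just performed amounts to the sketch below, using the interface names and 10.0.0.0/24 addresses from the trace. This is a hedged recap, not the helper itself: the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is built the same way and omitted, the ip netns add step is assumed to run just before the excerpt starts, and the ipts wrapper's iptables comments and all error handling are dropped.

  # Target-side veth ends live inside the nvmf_tgt_ns_spdk namespace; the host-side
  # peers are enslaved to a bridge so initiator (10.0.0.1) and target (10.0.0.3) can talk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Admit NVMe/TCP (port 4420) on the initiator interface, then sanity-check the path.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1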
00:17:42.533 [2024-11-20 05:27:57.012125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.791 [2024-11-20 05:27:57.164882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.791 [2024-11-20 05:27:57.210236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.791 [2024-11-20 05:27:57.210325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.791 [2024-11-20 05:27:57.210346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.791 [2024-11-20 05:27:57.210362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.791 [2024-11-20 05:27:57.210374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.791 [2024-11-20 05:27:57.210774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.791 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.791 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:42.791 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.791 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.791 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.049 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.049 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:43.049 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:43.049 true 00:17:43.306 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.306 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:43.563 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:43.563 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:43.563 05:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:43.821 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.821 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:44.079 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:44.079 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:44.079 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:44.340 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:17:44.340 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:44.602 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:44.602 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:44.602 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.602 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:45.181 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:45.181 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:45.181 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:45.181 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.181 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:45.764 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:45.764 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:45.764 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:46.035 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.035 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:46.296 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:46.296 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:46.296 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:46.296 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:46.296 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.wlfSn2Prba 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.j480GJB7h3 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wlfSn2Prba 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.j480GJB7h3 00:17:46.297 05:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:46.862 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:46.863 [2024-11-20 05:28:01.357642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.120 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.wlfSn2Prba 00:17:47.120 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wlfSn2Prba 00:17:47.120 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.378 [2024-11-20 05:28:01.693737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.378 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:47.635 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:47.892 [2024-11-20 05:28:02.333882] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.892 [2024-11-20 05:28:02.334193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:47.892 05:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.459 malloc0 00:17:48.459 05:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.717 05:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wlfSn2Prba 00:17:49.283 05:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.540 05:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wlfSn2Prba 00:18:01.758 Initializing NVMe Controllers 00:18:01.758 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.758 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.758 Initialization complete. Launching workers. 00:18:01.758 ======================================================== 00:18:01.758 Latency(us) 00:18:01.758 Device Information : IOPS MiB/s Average min max 00:18:01.758 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8213.42 32.08 7795.18 2975.36 15954.05 00:18:01.758 ======================================================== 00:18:01.758 Total : 8213.42 32.08 7795.18 2975.36 15954.05 00:18:01.758 00:18:01.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wlfSn2Prba 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wlfSn2Prba 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71924 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71924 /var/tmp/bdevperf.sock 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71924 ']' 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
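Because nvmf_tgt was started inside the target namespace with --wait-for-rpc, the ssl socket implementation can be pinned to TLS 1.3 (after the tls-version and ktls option round-trips above) before framework_start_init, and only then are the subsystem, TLS listener and per-host PSK configured. A minimal recap of that RPC sequence, reusing the literal interchange key and the mktemp path from the trace; rpc.py talks to the default /var/tmp/spdk.sock, so no netns prefix is needed:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.wlfSn2Prba    # PSK in NVMe TLS interchange form, as written by the test above

  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"

  # Socket layer: select the ssl implementation, force TLS 1.3, then finish init.
  $RPC sock_set_default_impl -i ssl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init

  # Target: TCP transport, subsystem, TLS-enabled listener (-k), a malloc namespace,
  # and the PSK that host1 is allowed to use.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf pass that follows connects to that listener from inside the same namespace with -S ssl and --psk-path pointing at the same file, which is where the ~8.2 k IOPS figure below comes from.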
00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.758 [2024-11-20 05:28:14.206948] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:01.758 [2024-11-20 05:28:14.207100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71924 ] 00:18:01.758 [2024-11-20 05:28:14.359636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.758 [2024-11-20 05:28:14.394064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.758 [2024-11-20 05:28:14.426455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wlfSn2Prba 00:18:01.758 05:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:01.758 [2024-11-20 05:28:15.115205] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.758 TLSTESTn1 00:18:01.758 05:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.758 Running I/O for 10 seconds... 
00:18:02.949 3597.00 IOPS, 14.05 MiB/s [2024-11-20T05:28:18.397Z] 3739.50 IOPS, 14.61 MiB/s [2024-11-20T05:28:19.775Z] 3767.67 IOPS, 14.72 MiB/s [2024-11-20T05:28:20.711Z] 3763.00 IOPS, 14.70 MiB/s [2024-11-20T05:28:21.645Z] 3787.00 IOPS, 14.79 MiB/s [2024-11-20T05:28:22.634Z] 3795.50 IOPS, 14.83 MiB/s [2024-11-20T05:28:23.569Z] 3817.86 IOPS, 14.91 MiB/s [2024-11-20T05:28:24.504Z] 3825.75 IOPS, 14.94 MiB/s [2024-11-20T05:28:25.441Z] 3829.00 IOPS, 14.96 MiB/s [2024-11-20T05:28:25.441Z] 3810.20 IOPS, 14.88 MiB/s 00:18:10.928 Latency(us) 00:18:10.928 [2024-11-20T05:28:25.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.928 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.928 Verification LBA range: start 0x0 length 0x2000 00:18:10.928 TLSTESTn1 : 10.04 3809.34 14.88 0.00 0.00 33512.39 7745.16 32648.84 00:18:10.928 [2024-11-20T05:28:25.441Z] =================================================================================================================== 00:18:10.928 [2024-11-20T05:28:25.441Z] Total : 3809.34 14.88 0.00 0.00 33512.39 7745.16 32648.84 00:18:10.928 { 00:18:10.928 "results": [ 00:18:10.928 { 00:18:10.928 "job": "TLSTESTn1", 00:18:10.928 "core_mask": "0x4", 00:18:10.928 "workload": "verify", 00:18:10.928 "status": "finished", 00:18:10.928 "verify_range": { 00:18:10.928 "start": 0, 00:18:10.928 "length": 8192 00:18:10.928 }, 00:18:10.928 "queue_depth": 128, 00:18:10.928 "io_size": 4096, 00:18:10.928 "runtime": 10.035336, 00:18:10.928 "iops": 3809.339318583852, 00:18:10.928 "mibps": 14.880231713218173, 00:18:10.928 "io_failed": 0, 00:18:10.928 "io_timeout": 0, 00:18:10.928 "avg_latency_us": 33512.38752061792, 00:18:10.928 "min_latency_us": 7745.163636363636, 00:18:10.928 "max_latency_us": 32648.843636363636 00:18:10.928 } 00:18:10.928 ], 00:18:10.928 "core_count": 1 00:18:10.928 } 00:18:10.928 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.928 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71924 00:18:10.928 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71924 ']' 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71924 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71924 00:18:10.929 killing process with pid 71924 00:18:10.929 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.929 00:18:10.929 Latency(us) 00:18:10.929 [2024-11-20T05:28:25.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.929 [2024-11-20T05:28:25.442Z] =================================================================================================================== 00:18:10.929 [2024-11-20T05:28:25.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71924' 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71924 00:18:10.929 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71924 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j480GJB7h3 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j480GJB7h3 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:11.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j480GJB7h3 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.j480GJB7h3 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72053 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72053 /var/tmp/bdevperf.sock 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72053 ']' 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:11.187 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.187 [2024-11-20 05:28:25.617474] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
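run_bdevperf, used for the successful pass above and for the failure cases that follow, drives the same TLS attach from a separate bdevperf application. Its essentials reduce to the sketch below, with the socket path, workload switches and NQNs taken from the trace; backgrounding and the waitforlisten/cleanup handling the helper performs are assumed and not shown.

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) on its own RPC socket; configuration arrives over that socket.
  $BDEVPERF -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

  # Give the initiator the PSK, attach to the TLS listener (the bdev appears as TLSTESTn1),
  # then kick off the verify workload.
  $RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.wlfSn2Prba
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests

The ~3.8 k IOPS TLSTESTn1 result above is that workload running over the TLS 1.3 connection; the runs below reuse the same shape but are expected to fail at the attach step.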
00:18:11.187 [2024-11-20 05:28:25.617610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72053 ] 00:18:11.446 [2024-11-20 05:28:25.762309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.446 [2024-11-20 05:28:25.796809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.446 [2024-11-20 05:28:25.828277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.446 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:11.446 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:11.446 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j480GJB7h3 00:18:12.014 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.273 [2024-11-20 05:28:26.553736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.273 [2024-11-20 05:28:26.558715] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:12.273 [2024-11-20 05:28:26.559331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184afb0 (107): Transport endpoint is not connected 00:18:12.273 [2024-11-20 05:28:26.560319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184afb0 (9): Bad file descriptor 00:18:12.273 [2024-11-20 05:28:26.561314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:12.273 [2024-11-20 05:28:26.561340] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:12.273 [2024-11-20 05:28:26.561351] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:12.273 [2024-11-20 05:28:26.561366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:12.273 request: 00:18:12.274 { 00:18:12.274 "name": "TLSTEST", 00:18:12.274 "trtype": "tcp", 00:18:12.274 "traddr": "10.0.0.3", 00:18:12.274 "adrfam": "ipv4", 00:18:12.274 "trsvcid": "4420", 00:18:12.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.274 "prchk_reftag": false, 00:18:12.274 "prchk_guard": false, 00:18:12.274 "hdgst": false, 00:18:12.274 "ddgst": false, 00:18:12.274 "psk": "key0", 00:18:12.274 "allow_unrecognized_csi": false, 00:18:12.274 "method": "bdev_nvme_attach_controller", 00:18:12.274 "req_id": 1 00:18:12.274 } 00:18:12.274 Got JSON-RPC error response 00:18:12.274 response: 00:18:12.274 { 00:18:12.274 "code": -5, 00:18:12.274 "message": "Input/output error" 00:18:12.274 } 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72053 ']' 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:12.274 killing process with pid 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72053' 00:18:12.274 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.274 00:18:12.274 Latency(us) 00:18:12.274 [2024-11-20T05:28:26.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.274 [2024-11-20T05:28:26.787Z] =================================================================================================================== 00:18:12.274 [2024-11-20T05:28:26.787Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72053 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wlfSn2Prba 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wlfSn2Prba 
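The failure just logged hands the initiator the second key, /tmp/tmp.j480GJB7h3, which was never registered against cnode1, so the target tears the handshake down (spdk_sock_recv errno 107) and the attach RPC returns -5. The suite's NOT wrapper effectively inverts the exit status; written out long-hand, and assuming the same bdevperf socket and rpc.py path as above, the wrong-key expectation looks roughly like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Negative case: same attach, but with a PSK the target does not know for this host/subsystem.
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j480GJB7h3
  if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach with the wrong PSK unexpectedly succeeded" >&2
      exit 1
  fi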
00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wlfSn2Prba 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wlfSn2Prba 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72074 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72074 /var/tmp/bdevperf.sock 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72074 ']' 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.274 05:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.533 [2024-11-20 05:28:26.800240] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:18:12.533 [2024-11-20 05:28:26.800333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72074 ] 00:18:12.533 [2024-11-20 05:28:26.945447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.533 [2024-11-20 05:28:26.997925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.533 [2024-11-20 05:28:27.038798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.792 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.792 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:12.792 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wlfSn2Prba 00:18:13.051 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:13.322 [2024-11-20 05:28:27.668971] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.322 [2024-11-20 05:28:27.676178] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.322 [2024-11-20 05:28:27.676242] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.322 [2024-11-20 05:28:27.676318] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:13.322 [2024-11-20 05:28:27.676647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a2fb0 (107): Transport endpoint is not connected 00:18:13.322 [2024-11-20 05:28:27.677633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a2fb0 (9): Bad file descriptor 00:18:13.322 [2024-11-20 05:28:27.678629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:13.322 [2024-11-20 05:28:27.678658] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:13.322 [2024-11-20 05:28:27.678670] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:13.322 [2024-11-20 05:28:27.678694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:13.322 request: 00:18:13.322 { 00:18:13.322 "name": "TLSTEST", 00:18:13.322 "trtype": "tcp", 00:18:13.322 "traddr": "10.0.0.3", 00:18:13.322 "adrfam": "ipv4", 00:18:13.322 "trsvcid": "4420", 00:18:13.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.322 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:13.322 "prchk_reftag": false, 00:18:13.322 "prchk_guard": false, 00:18:13.322 "hdgst": false, 00:18:13.322 "ddgst": false, 00:18:13.322 "psk": "key0", 00:18:13.322 "allow_unrecognized_csi": false, 00:18:13.322 "method": "bdev_nvme_attach_controller", 00:18:13.322 "req_id": 1 00:18:13.322 } 00:18:13.322 Got JSON-RPC error response 00:18:13.322 response: 00:18:13.322 { 00:18:13.322 "code": -5, 00:18:13.322 "message": "Input/output error" 00:18:13.322 } 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72074 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72074 ']' 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72074 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72074 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:13.322 killing process with pid 72074 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72074' 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72074 00:18:13.322 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.322 00:18:13.322 Latency(us) 00:18:13.322 [2024-11-20T05:28:27.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.322 [2024-11-20T05:28:27.835Z] =================================================================================================================== 00:18:13.322 [2024-11-20T05:28:27.835Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.322 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72074 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wlfSn2Prba 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wlfSn2Prba 
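The server resolves the PSK from the TLS identity string, NVMe0R01 <hostnqn> <subnqn>, which is exactly what the "Could not find PSK for identity" errors above report: only host1 was added to cnode1 with --psk key0, so host2 has nothing to match, and the next case fails the same way for the unconfigured cnode2. Purely for illustration, and deliberately not something this test does, the target-side registration that would make the host2 pairing valid is:

  # Hypothetical: register host2 against cnode1 with the same PSK so its identity would resolve.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0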
00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wlfSn2Prba 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wlfSn2Prba 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72095 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72095 /var/tmp/bdevperf.sock 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72095 ']' 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.618 05:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.618 [2024-11-20 05:28:27.922115] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:18:13.618 [2024-11-20 05:28:27.922223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72095 ] 00:18:13.618 [2024-11-20 05:28:28.061007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.618 [2024-11-20 05:28:28.094503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.618 [2024-11-20 05:28:28.124640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:13.877 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:13.877 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:13.877 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wlfSn2Prba 00:18:14.134 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.393 [2024-11-20 05:28:28.864637] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.393 [2024-11-20 05:28:28.876223] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.393 [2024-11-20 05:28:28.876276] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.393 [2024-11-20 05:28:28.876341] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:14.393 [2024-11-20 05:28:28.876382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3fb0 (107): Transport endpoint is not connected 00:18:14.393 [2024-11-20 05:28:28.877356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3fb0 (9): Bad file descriptor 00:18:14.393 [2024-11-20 05:28:28.878352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:14.393 [2024-11-20 05:28:28.878387] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:14.393 [2024-11-20 05:28:28.878400] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:14.393 [2024-11-20 05:28:28.878417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:14.393 request: 00:18:14.393 { 00:18:14.393 "name": "TLSTEST", 00:18:14.393 "trtype": "tcp", 00:18:14.393 "traddr": "10.0.0.3", 00:18:14.393 "adrfam": "ipv4", 00:18:14.393 "trsvcid": "4420", 00:18:14.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:14.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.393 "prchk_reftag": false, 00:18:14.393 "prchk_guard": false, 00:18:14.393 "hdgst": false, 00:18:14.393 "ddgst": false, 00:18:14.393 "psk": "key0", 00:18:14.393 "allow_unrecognized_csi": false, 00:18:14.393 "method": "bdev_nvme_attach_controller", 00:18:14.393 "req_id": 1 00:18:14.393 } 00:18:14.393 Got JSON-RPC error response 00:18:14.393 response: 00:18:14.393 { 00:18:14.393 "code": -5, 00:18:14.393 "message": "Input/output error" 00:18:14.393 } 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72095 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72095 ']' 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72095 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72095 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:14.651 killing process with pid 72095 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72095' 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72095 00:18:14.651 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.651 00:18:14.651 Latency(us) 00:18:14.651 [2024-11-20T05:28:29.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.651 [2024-11-20T05:28:29.164Z] =================================================================================================================== 00:18:14.651 [2024-11-20T05:28:29.164Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.651 05:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72095 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.651 05:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:14.651 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72122 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72122 /var/tmp/bdevperf.sock 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72122 ']' 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:14.652 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.652 [2024-11-20 05:28:29.127102] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:18:14.652 [2024-11-20 05:28:29.127196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72122 ] 00:18:14.908 [2024-11-20 05:28:29.274507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.908 [2024-11-20 05:28:29.308071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.908 [2024-11-20 05:28:29.338483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.908 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.908 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:14.908 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:15.473 [2024-11-20 05:28:29.710500] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:15.473 [2024-11-20 05:28:29.710567] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:15.473 request: 00:18:15.473 { 00:18:15.473 "name": "key0", 00:18:15.473 "path": "", 00:18:15.473 "method": "keyring_file_add_key", 00:18:15.473 "req_id": 1 00:18:15.473 } 00:18:15.473 Got JSON-RPC error response 00:18:15.473 response: 00:18:15.473 { 00:18:15.473 "code": -1, 00:18:15.473 "message": "Operation not permitted" 00:18:15.473 } 00:18:15.473 05:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.731 [2024-11-20 05:28:30.054692] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.731 [2024-11-20 05:28:30.054768] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:15.731 request: 00:18:15.731 { 00:18:15.731 "name": "TLSTEST", 00:18:15.731 "trtype": "tcp", 00:18:15.731 "traddr": "10.0.0.3", 00:18:15.731 "adrfam": "ipv4", 00:18:15.731 "trsvcid": "4420", 00:18:15.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.731 "prchk_reftag": false, 00:18:15.731 "prchk_guard": false, 00:18:15.731 "hdgst": false, 00:18:15.731 "ddgst": false, 00:18:15.731 "psk": "key0", 00:18:15.731 "allow_unrecognized_csi": false, 00:18:15.731 "method": "bdev_nvme_attach_controller", 00:18:15.731 "req_id": 1 00:18:15.731 } 00:18:15.731 Got JSON-RPC error response 00:18:15.731 response: 00:18:15.731 { 00:18:15.731 "code": -126, 00:18:15.731 "message": "Required key not available" 00:18:15.731 } 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72122 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72122 ']' 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72122 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.731 05:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72122 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:15.731 killing process with pid 72122 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72122' 00:18:15.731 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72122 00:18:15.731 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.731 00:18:15.731 Latency(us) 00:18:15.731 [2024-11-20T05:28:30.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.732 [2024-11-20T05:28:30.245Z] =================================================================================================================== 00:18:15.732 [2024-11-20T05:28:30.245Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.732 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72122 00:18:15.732 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.732 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:15.732 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.732 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71680 ']' 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:15.990 killing process with pid 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71680' 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71680 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vy9cFWYoW2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vy9cFWYoW2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72153 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72153 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72153 ']' 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.990 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.249 [2024-11-20 05:28:30.536538] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:16.249 [2024-11-20 05:28:30.536660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.249 [2024-11-20 05:28:30.694452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.249 [2024-11-20 05:28:30.726258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.249 [2024-11-20 05:28:30.726323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.249 [2024-11-20 05:28:30.726334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.249 [2024-11-20 05:28:30.726342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.249 [2024-11-20 05:28:30.726349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.249 [2024-11-20 05:28:30.726661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.249 [2024-11-20 05:28:30.756036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vy9cFWYoW2 00:18:16.508 05:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.766 [2024-11-20 05:28:31.135842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.766 05:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.025 05:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:17.284 [2024-11-20 05:28:31.788091] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.284 [2024-11-20 05:28:31.788740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.542 05:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.800 malloc0 00:18:17.800 05:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.059 05:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:18.316 05:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vy9cFWYoW2 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vy9cFWYoW2 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72207 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72207 /var/tmp/bdevperf.sock 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72207 ']' 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.574 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 [2024-11-20 05:28:33.110168] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
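bdevperf (pid 72207) acts as the TLS initiator in this run: once its RPC socket comes up, target/tls.sh hands it the same PSK, points it at the listener configured above, and starts the verify workload. Condensed into a sketch (same binaries, sockets and arguments as the trace below; the shell variables are shorthand added here, and the real script waits for the RPC socket before issuing any calls):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  KEY=/tmp/tmp.vy9cFWYoW2

  # Start bdevperf on core 2 (-m 0x4), idle until told to run (-z), with its own RPC socket.
  "$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

  # Register the PSK on the initiator side and attach to the TLS listener set up above.
  "$RPC" -s "$SOCK" keyring_file_add_key key0 "$KEY"
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # Kick off the queued verify workload against the resulting TLSTESTn1 bdev.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests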
00:18:18.833 [2024-11-20 05:28:33.110737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72207 ] 00:18:18.833 [2024-11-20 05:28:33.258221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.833 [2024-11-20 05:28:33.300381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.833 [2024-11-20 05:28:33.336001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.091 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.091 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:19.091 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:19.350 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.609 [2024-11-20 05:28:33.923602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.609 TLSTESTn1 00:18:19.609 05:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.868 Running I/O for 10 seconds... 00:18:21.740 3715.00 IOPS, 14.51 MiB/s [2024-11-20T05:28:37.190Z] 3691.00 IOPS, 14.42 MiB/s [2024-11-20T05:28:38.567Z] 3733.33 IOPS, 14.58 MiB/s [2024-11-20T05:28:39.503Z] 3754.50 IOPS, 14.67 MiB/s [2024-11-20T05:28:40.441Z] 3780.60 IOPS, 14.77 MiB/s [2024-11-20T05:28:41.476Z] 3798.67 IOPS, 14.84 MiB/s [2024-11-20T05:28:42.411Z] 3771.14 IOPS, 14.73 MiB/s [2024-11-20T05:28:43.345Z] 3770.25 IOPS, 14.73 MiB/s [2024-11-20T05:28:44.281Z] 3736.78 IOPS, 14.60 MiB/s [2024-11-20T05:28:44.281Z] 3713.20 IOPS, 14.50 MiB/s 00:18:29.768 Latency(us) 00:18:29.768 [2024-11-20T05:28:44.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.769 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.769 Verification LBA range: start 0x0 length 0x2000 00:18:29.769 TLSTESTn1 : 10.02 3719.01 14.53 0.00 0.00 34362.17 5749.29 30742.34 00:18:29.769 [2024-11-20T05:28:44.282Z] =================================================================================================================== 00:18:29.769 [2024-11-20T05:28:44.282Z] Total : 3719.01 14.53 0.00 0.00 34362.17 5749.29 30742.34 00:18:29.769 { 00:18:29.769 "results": [ 00:18:29.769 { 00:18:29.769 "job": "TLSTESTn1", 00:18:29.769 "core_mask": "0x4", 00:18:29.769 "workload": "verify", 00:18:29.769 "status": "finished", 00:18:29.769 "verify_range": { 00:18:29.769 "start": 0, 00:18:29.769 "length": 8192 00:18:29.769 }, 00:18:29.769 "queue_depth": 128, 00:18:29.769 "io_size": 4096, 00:18:29.769 "runtime": 10.018517, 00:18:29.769 "iops": 3719.013502697056, 00:18:29.769 "mibps": 14.527396494910375, 00:18:29.769 "io_failed": 0, 00:18:29.769 "io_timeout": 0, 00:18:29.769 "avg_latency_us": 34362.17448513965, 00:18:29.769 "min_latency_us": 5749.294545454545, 00:18:29.769 
"max_latency_us": 30742.34181818182 00:18:29.769 } 00:18:29.769 ], 00:18:29.769 "core_count": 1 00:18:29.769 } 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72207 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72207 ']' 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72207 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72207 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:29.769 killing process with pid 72207 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72207' 00:18:29.769 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.769 00:18:29.769 Latency(us) 00:18:29.769 [2024-11-20T05:28:44.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.769 [2024-11-20T05:28:44.282Z] =================================================================================================================== 00:18:29.769 [2024-11-20T05:28:44.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72207 00:18:29.769 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72207 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vy9cFWYoW2 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vy9cFWYoW2 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vy9cFWYoW2 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vy9cFWYoW2 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vy9cFWYoW2 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72335 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72335 /var/tmp/bdevperf.sock 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72335 ']' 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:30.028 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.028 [2024-11-20 05:28:44.409773] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:18:30.028 [2024-11-20 05:28:44.410745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72335 ] 00:18:30.287 [2024-11-20 05:28:44.558097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.287 [2024-11-20 05:28:44.593921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.287 [2024-11-20 05:28:44.625133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.287 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:30.287 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:30.287 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:30.547 [2024-11-20 05:28:45.058144] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vy9cFWYoW2': 0100666 00:18:30.804 [2024-11-20 05:28:45.058832] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.804 request: 00:18:30.804 { 00:18:30.805 "name": "key0", 00:18:30.805 "path": "/tmp/tmp.vy9cFWYoW2", 00:18:30.805 "method": "keyring_file_add_key", 00:18:30.805 "req_id": 1 00:18:30.805 } 00:18:30.805 Got JSON-RPC error response 00:18:30.805 response: 00:18:30.805 { 00:18:30.805 "code": -1, 00:18:30.805 "message": "Operation not permitted" 00:18:30.805 } 00:18:30.805 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.063 [2024-11-20 05:28:45.370312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.063 [2024-11-20 05:28:45.370733] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:31.063 request: 00:18:31.063 { 00:18:31.063 "name": "TLSTEST", 00:18:31.063 "trtype": "tcp", 00:18:31.063 "traddr": "10.0.0.3", 00:18:31.063 "adrfam": "ipv4", 00:18:31.063 "trsvcid": "4420", 00:18:31.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.063 "prchk_reftag": false, 00:18:31.063 "prchk_guard": false, 00:18:31.063 "hdgst": false, 00:18:31.063 "ddgst": false, 00:18:31.063 "psk": "key0", 00:18:31.063 "allow_unrecognized_csi": false, 00:18:31.063 "method": "bdev_nvme_attach_controller", 00:18:31.063 "req_id": 1 00:18:31.063 } 00:18:31.063 Got JSON-RPC error response 00:18:31.063 response: 00:18:31.063 { 00:18:31.063 "code": -126, 00:18:31.063 "message": "Required key not available" 00:18:31.063 } 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72335 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72335 ']' 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72335 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72335 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:31.063 killing process with pid 72335 00:18:31.063 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.063 00:18:31.063 Latency(us) 00:18:31.063 [2024-11-20T05:28:45.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.063 [2024-11-20T05:28:45.576Z] =================================================================================================================== 00:18:31.063 [2024-11-20T05:28:45.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72335' 00:18:31.063 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72335 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72335 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72153 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72153 ']' 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72153 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.064 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72153 00:18:31.322 killing process with pid 72153 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72153' 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72153 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72153 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72361 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72361 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72361 ']' 00:18:31.322 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.323 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.323 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.323 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.323 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.323 [2024-11-20 05:28:45.802242] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:31.323 [2024-11-20 05:28:45.802332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.582 [2024-11-20 05:28:45.952099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.582 [2024-11-20 05:28:45.992173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.582 [2024-11-20 05:28:45.992245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.582 [2024-11-20 05:28:45.992259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.582 [2024-11-20 05:28:45.992270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.582 [2024-11-20 05:28:45.992278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
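The setup attempt that follows is the expected-failure case: target/tls.sh@171 flipped the PSK file to mode 0666 earlier, and SPDK's keyring_file backend refuses key files that are accessible to group/other (and, as the first run above showed, non-absolute paths), so keyring_file_add_key fails and the subsequent nvmf_subsystem_add_host reports that key0 does not exist. A minimal sketch of that permission check, reusing the same rpc.py and key path as the trace:

  KEY=/tmp/tmp.vy9cFWYoW2                               # PSK file written by target/tls.sh above
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # talks to the target on /var/tmp/spdk.sock

  chmod 0666 "$KEY"
  "$RPC" keyring_file_add_key key0 "$KEY" \
      && echo "unexpected: world-accessible key file was accepted" \
      || echo "rejected as expected (key file must be 0600)"

  chmod 0600 "$KEY"
  "$RPC" keyring_file_add_key key0 "$KEY"               # accepted once the mode is restored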
00:18:31.582 [2024-11-20 05:28:45.992641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.582 [2024-11-20 05:28:46.027543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vy9cFWYoW2 00:18:31.841 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.099 [2024-11-20 05:28:46.407859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.099 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.358 05:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:32.616 [2024-11-20 05:28:47.032001] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.616 [2024-11-20 05:28:47.032449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.616 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.874 malloc0 00:18:32.874 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.440 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:33.698 
[2024-11-20 05:28:48.003185] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vy9cFWYoW2': 0100666 00:18:33.698 [2024-11-20 05:28:48.003240] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:33.698 request: 00:18:33.698 { 00:18:33.698 "name": "key0", 00:18:33.698 "path": "/tmp/tmp.vy9cFWYoW2", 00:18:33.698 "method": "keyring_file_add_key", 00:18:33.698 "req_id": 1 00:18:33.698 } 00:18:33.698 Got JSON-RPC error response 00:18:33.698 response: 00:18:33.698 { 00:18:33.698 "code": -1, 00:18:33.698 "message": "Operation not permitted" 00:18:33.698 } 00:18:33.698 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.957 [2024-11-20 05:28:48.347290] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:33.957 [2024-11-20 05:28:48.347361] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:33.957 request: 00:18:33.957 { 00:18:33.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.957 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.957 "psk": "key0", 00:18:33.957 "method": "nvmf_subsystem_add_host", 00:18:33.957 "req_id": 1 00:18:33.957 } 00:18:33.957 Got JSON-RPC error response 00:18:33.957 response: 00:18:33.957 { 00:18:33.957 "code": -32603, 00:18:33.957 "message": "Internal error" 00:18:33.957 } 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72361 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72361 ']' 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72361 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72361 00:18:33.957 killing process with pid 72361 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72361' 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72361 00:18:33.957 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72361 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vy9cFWYoW2 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72423 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72423 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72423 ']' 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:34.215 05:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.215 [2024-11-20 05:28:48.634376] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:34.215 [2024-11-20 05:28:48.634490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.474 [2024-11-20 05:28:48.786031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.474 [2024-11-20 05:28:48.823365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.474 [2024-11-20 05:28:48.823418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.474 [2024-11-20 05:28:48.823432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.474 [2024-11-20 05:28:48.823442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.474 [2024-11-20 05:28:48.823451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
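With the key file back at 0600 (target/tls.sh@182), the trace below repeats setup_nvmf_tgt against the fresh target (pid 72423), this time successfully, and then runs bdevperf against it. The target-side sequence, condensed into one place as a sketch (same rpc.py calls, order and arguments as the trace; not a substitute for target/tls.sh itself):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.vy9cFWYoW2

  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -k                      # -k: TLS-secured listener (still experimental)
  "$RPC" bdev_malloc_create 32 4096 -b malloc0           # 32 MiB namespace backing, 4096-byte blocks
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$RPC" keyring_file_add_key key0 "$KEY"                # now succeeds: the file is 0600
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0               # bind host1's access to that PSK

The configuration dumped by save_config at target/tls.sh@198 further down reflects exactly these calls: the malloc0 namespace, the secure_channel listener on 10.0.0.3:4420, and key0 under the keyring subsystem.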
00:18:34.474 [2024-11-20 05:28:48.823843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.474 [2024-11-20 05:28:48.856123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vy9cFWYoW2 00:18:35.408 05:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.667 [2024-11-20 05:28:50.028161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.667 05:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.007 05:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:36.281 [2024-11-20 05:28:50.704298] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.281 [2024-11-20 05:28:50.704535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:36.281 05:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:36.540 malloc0 00:18:36.540 05:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.108 05:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:37.366 05:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72484 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72484 /var/tmp/bdevperf.sock 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72484 ']' 
00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.625 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.625 [2024-11-20 05:28:52.129499] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:37.625 [2024-11-20 05:28:52.129639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:18:37.884 [2024-11-20 05:28:52.292213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.884 [2024-11-20 05:28:52.336542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.884 [2024-11-20 05:28:52.370388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.143 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:38.143 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:38.143 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:38.401 05:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.659 [2024-11-20 05:28:53.069287] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.659 TLSTESTn1 00:18:38.659 05:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:39.227 05:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:39.227 "subsystems": [ 00:18:39.227 { 00:18:39.227 "subsystem": "keyring", 00:18:39.227 "config": [ 00:18:39.227 { 00:18:39.227 "method": "keyring_file_add_key", 00:18:39.227 "params": { 00:18:39.227 "name": "key0", 00:18:39.227 "path": "/tmp/tmp.vy9cFWYoW2" 00:18:39.227 } 00:18:39.227 } 00:18:39.227 ] 00:18:39.227 }, 00:18:39.227 { 00:18:39.227 "subsystem": "iobuf", 00:18:39.227 "config": [ 00:18:39.227 { 00:18:39.227 "method": "iobuf_set_options", 00:18:39.227 "params": { 00:18:39.227 "small_pool_count": 8192, 00:18:39.227 "large_pool_count": 1024, 00:18:39.227 "small_bufsize": 8192, 00:18:39.227 "large_bufsize": 135168, 00:18:39.227 "enable_numa": false 00:18:39.227 } 00:18:39.227 } 00:18:39.227 ] 00:18:39.227 }, 00:18:39.227 { 00:18:39.227 "subsystem": "sock", 00:18:39.227 "config": [ 00:18:39.227 { 00:18:39.227 "method": "sock_set_default_impl", 00:18:39.227 "params": { 
00:18:39.227 "impl_name": "uring" 00:18:39.227 } 00:18:39.227 }, 00:18:39.227 { 00:18:39.227 "method": "sock_impl_set_options", 00:18:39.227 "params": { 00:18:39.227 "impl_name": "ssl", 00:18:39.227 "recv_buf_size": 4096, 00:18:39.227 "send_buf_size": 4096, 00:18:39.227 "enable_recv_pipe": true, 00:18:39.227 "enable_quickack": false, 00:18:39.227 "enable_placement_id": 0, 00:18:39.227 "enable_zerocopy_send_server": true, 00:18:39.227 "enable_zerocopy_send_client": false, 00:18:39.227 "zerocopy_threshold": 0, 00:18:39.227 "tls_version": 0, 00:18:39.227 "enable_ktls": false 00:18:39.227 } 00:18:39.227 }, 00:18:39.227 { 00:18:39.227 "method": "sock_impl_set_options", 00:18:39.227 "params": { 00:18:39.228 "impl_name": "posix", 00:18:39.228 "recv_buf_size": 2097152, 00:18:39.228 "send_buf_size": 2097152, 00:18:39.228 "enable_recv_pipe": true, 00:18:39.228 "enable_quickack": false, 00:18:39.228 "enable_placement_id": 0, 00:18:39.228 "enable_zerocopy_send_server": true, 00:18:39.228 "enable_zerocopy_send_client": false, 00:18:39.228 "zerocopy_threshold": 0, 00:18:39.228 "tls_version": 0, 00:18:39.228 "enable_ktls": false 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "sock_impl_set_options", 00:18:39.228 "params": { 00:18:39.228 "impl_name": "uring", 00:18:39.228 "recv_buf_size": 2097152, 00:18:39.228 "send_buf_size": 2097152, 00:18:39.228 "enable_recv_pipe": true, 00:18:39.228 "enable_quickack": false, 00:18:39.228 "enable_placement_id": 0, 00:18:39.228 "enable_zerocopy_send_server": false, 00:18:39.228 "enable_zerocopy_send_client": false, 00:18:39.228 "zerocopy_threshold": 0, 00:18:39.228 "tls_version": 0, 00:18:39.228 "enable_ktls": false 00:18:39.228 } 00:18:39.228 } 00:18:39.228 ] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "vmd", 00:18:39.228 "config": [] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "accel", 00:18:39.228 "config": [ 00:18:39.228 { 00:18:39.228 "method": "accel_set_options", 00:18:39.228 "params": { 00:18:39.228 "small_cache_size": 128, 00:18:39.228 "large_cache_size": 16, 00:18:39.228 "task_count": 2048, 00:18:39.228 "sequence_count": 2048, 00:18:39.228 "buf_count": 2048 00:18:39.228 } 00:18:39.228 } 00:18:39.228 ] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "bdev", 00:18:39.228 "config": [ 00:18:39.228 { 00:18:39.228 "method": "bdev_set_options", 00:18:39.228 "params": { 00:18:39.228 "bdev_io_pool_size": 65535, 00:18:39.228 "bdev_io_cache_size": 256, 00:18:39.228 "bdev_auto_examine": true, 00:18:39.228 "iobuf_small_cache_size": 128, 00:18:39.228 "iobuf_large_cache_size": 16 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_raid_set_options", 00:18:39.228 "params": { 00:18:39.228 "process_window_size_kb": 1024, 00:18:39.228 "process_max_bandwidth_mb_sec": 0 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_iscsi_set_options", 00:18:39.228 "params": { 00:18:39.228 "timeout_sec": 30 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_nvme_set_options", 00:18:39.228 "params": { 00:18:39.228 "action_on_timeout": "none", 00:18:39.228 "timeout_us": 0, 00:18:39.228 "timeout_admin_us": 0, 00:18:39.228 "keep_alive_timeout_ms": 10000, 00:18:39.228 "arbitration_burst": 0, 00:18:39.228 "low_priority_weight": 0, 00:18:39.228 "medium_priority_weight": 0, 00:18:39.228 "high_priority_weight": 0, 00:18:39.228 "nvme_adminq_poll_period_us": 10000, 00:18:39.228 "nvme_ioq_poll_period_us": 0, 00:18:39.228 "io_queue_requests": 0, 00:18:39.228 "delay_cmd_submit": 
true, 00:18:39.228 "transport_retry_count": 4, 00:18:39.228 "bdev_retry_count": 3, 00:18:39.228 "transport_ack_timeout": 0, 00:18:39.228 "ctrlr_loss_timeout_sec": 0, 00:18:39.228 "reconnect_delay_sec": 0, 00:18:39.228 "fast_io_fail_timeout_sec": 0, 00:18:39.228 "disable_auto_failback": false, 00:18:39.228 "generate_uuids": false, 00:18:39.228 "transport_tos": 0, 00:18:39.228 "nvme_error_stat": false, 00:18:39.228 "rdma_srq_size": 0, 00:18:39.228 "io_path_stat": false, 00:18:39.228 "allow_accel_sequence": false, 00:18:39.228 "rdma_max_cq_size": 0, 00:18:39.228 "rdma_cm_event_timeout_ms": 0, 00:18:39.228 "dhchap_digests": [ 00:18:39.228 "sha256", 00:18:39.228 "sha384", 00:18:39.228 "sha512" 00:18:39.228 ], 00:18:39.228 "dhchap_dhgroups": [ 00:18:39.228 "null", 00:18:39.228 "ffdhe2048", 00:18:39.228 "ffdhe3072", 00:18:39.228 "ffdhe4096", 00:18:39.228 "ffdhe6144", 00:18:39.228 "ffdhe8192" 00:18:39.228 ] 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_nvme_set_hotplug", 00:18:39.228 "params": { 00:18:39.228 "period_us": 100000, 00:18:39.228 "enable": false 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_malloc_create", 00:18:39.228 "params": { 00:18:39.228 "name": "malloc0", 00:18:39.228 "num_blocks": 8192, 00:18:39.228 "block_size": 4096, 00:18:39.228 "physical_block_size": 4096, 00:18:39.228 "uuid": "33813d7b-9121-4369-9d6a-36fc099d76e0", 00:18:39.228 "optimal_io_boundary": 0, 00:18:39.228 "md_size": 0, 00:18:39.228 "dif_type": 0, 00:18:39.228 "dif_is_head_of_md": false, 00:18:39.228 "dif_pi_format": 0 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "bdev_wait_for_examine" 00:18:39.228 } 00:18:39.228 ] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "nbd", 00:18:39.228 "config": [] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "scheduler", 00:18:39.228 "config": [ 00:18:39.228 { 00:18:39.228 "method": "framework_set_scheduler", 00:18:39.228 "params": { 00:18:39.228 "name": "static" 00:18:39.228 } 00:18:39.228 } 00:18:39.228 ] 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "subsystem": "nvmf", 00:18:39.228 "config": [ 00:18:39.228 { 00:18:39.228 "method": "nvmf_set_config", 00:18:39.228 "params": { 00:18:39.228 "discovery_filter": "match_any", 00:18:39.228 "admin_cmd_passthru": { 00:18:39.228 "identify_ctrlr": false 00:18:39.228 }, 00:18:39.228 "dhchap_digests": [ 00:18:39.228 "sha256", 00:18:39.228 "sha384", 00:18:39.228 "sha512" 00:18:39.228 ], 00:18:39.228 "dhchap_dhgroups": [ 00:18:39.228 "null", 00:18:39.228 "ffdhe2048", 00:18:39.228 "ffdhe3072", 00:18:39.228 "ffdhe4096", 00:18:39.228 "ffdhe6144", 00:18:39.228 "ffdhe8192" 00:18:39.228 ] 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "nvmf_set_max_subsystems", 00:18:39.228 "params": { 00:18:39.228 "max_subsystems": 1024 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "nvmf_set_crdt", 00:18:39.228 "params": { 00:18:39.228 "crdt1": 0, 00:18:39.228 "crdt2": 0, 00:18:39.228 "crdt3": 0 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "nvmf_create_transport", 00:18:39.228 "params": { 00:18:39.228 "trtype": "TCP", 00:18:39.228 "max_queue_depth": 128, 00:18:39.228 "max_io_qpairs_per_ctrlr": 127, 00:18:39.228 "in_capsule_data_size": 4096, 00:18:39.228 "max_io_size": 131072, 00:18:39.228 "io_unit_size": 131072, 00:18:39.228 "max_aq_depth": 128, 00:18:39.228 "num_shared_buffers": 511, 00:18:39.228 "buf_cache_size": 4294967295, 00:18:39.228 "dif_insert_or_strip": false, 00:18:39.228 "zcopy": false, 
00:18:39.228 "c2h_success": false, 00:18:39.228 "sock_priority": 0, 00:18:39.228 "abort_timeout_sec": 1, 00:18:39.228 "ack_timeout": 0, 00:18:39.228 "data_wr_pool_size": 0 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "nvmf_create_subsystem", 00:18:39.228 "params": { 00:18:39.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.228 "allow_any_host": false, 00:18:39.228 "serial_number": "SPDK00000000000001", 00:18:39.228 "model_number": "SPDK bdev Controller", 00:18:39.228 "max_namespaces": 10, 00:18:39.228 "min_cntlid": 1, 00:18:39.228 "max_cntlid": 65519, 00:18:39.228 "ana_reporting": false 00:18:39.228 } 00:18:39.228 }, 00:18:39.228 { 00:18:39.228 "method": "nvmf_subsystem_add_host", 00:18:39.228 "params": { 00:18:39.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.229 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.229 "psk": "key0" 00:18:39.229 } 00:18:39.229 }, 00:18:39.229 { 00:18:39.229 "method": "nvmf_subsystem_add_ns", 00:18:39.229 "params": { 00:18:39.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.229 "namespace": { 00:18:39.229 "nsid": 1, 00:18:39.229 "bdev_name": "malloc0", 00:18:39.229 "nguid": "33813D7B912143699D6A36FC099D76E0", 00:18:39.229 "uuid": "33813d7b-9121-4369-9d6a-36fc099d76e0", 00:18:39.229 "no_auto_visible": false 00:18:39.229 } 00:18:39.229 } 00:18:39.229 }, 00:18:39.229 { 00:18:39.229 "method": "nvmf_subsystem_add_listener", 00:18:39.229 "params": { 00:18:39.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.229 "listen_address": { 00:18:39.229 "trtype": "TCP", 00:18:39.229 "adrfam": "IPv4", 00:18:39.229 "traddr": "10.0.0.3", 00:18:39.229 "trsvcid": "4420" 00:18:39.229 }, 00:18:39.229 "secure_channel": true 00:18:39.229 } 00:18:39.229 } 00:18:39.229 ] 00:18:39.229 } 00:18:39.229 ] 00:18:39.229 }' 00:18:39.229 05:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:39.797 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:39.797 "subsystems": [ 00:18:39.797 { 00:18:39.797 "subsystem": "keyring", 00:18:39.797 "config": [ 00:18:39.797 { 00:18:39.797 "method": "keyring_file_add_key", 00:18:39.797 "params": { 00:18:39.797 "name": "key0", 00:18:39.797 "path": "/tmp/tmp.vy9cFWYoW2" 00:18:39.797 } 00:18:39.797 } 00:18:39.797 ] 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "subsystem": "iobuf", 00:18:39.797 "config": [ 00:18:39.797 { 00:18:39.797 "method": "iobuf_set_options", 00:18:39.797 "params": { 00:18:39.797 "small_pool_count": 8192, 00:18:39.797 "large_pool_count": 1024, 00:18:39.797 "small_bufsize": 8192, 00:18:39.797 "large_bufsize": 135168, 00:18:39.797 "enable_numa": false 00:18:39.797 } 00:18:39.797 } 00:18:39.797 ] 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "subsystem": "sock", 00:18:39.797 "config": [ 00:18:39.797 { 00:18:39.797 "method": "sock_set_default_impl", 00:18:39.797 "params": { 00:18:39.797 "impl_name": "uring" 00:18:39.797 } 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "method": "sock_impl_set_options", 00:18:39.797 "params": { 00:18:39.797 "impl_name": "ssl", 00:18:39.797 "recv_buf_size": 4096, 00:18:39.797 "send_buf_size": 4096, 00:18:39.797 "enable_recv_pipe": true, 00:18:39.797 "enable_quickack": false, 00:18:39.797 "enable_placement_id": 0, 00:18:39.797 "enable_zerocopy_send_server": true, 00:18:39.797 "enable_zerocopy_send_client": false, 00:18:39.797 "zerocopy_threshold": 0, 00:18:39.797 "tls_version": 0, 00:18:39.797 "enable_ktls": false 00:18:39.797 } 00:18:39.797 }, 
00:18:39.797 { 00:18:39.797 "method": "sock_impl_set_options", 00:18:39.797 "params": { 00:18:39.797 "impl_name": "posix", 00:18:39.797 "recv_buf_size": 2097152, 00:18:39.797 "send_buf_size": 2097152, 00:18:39.797 "enable_recv_pipe": true, 00:18:39.797 "enable_quickack": false, 00:18:39.797 "enable_placement_id": 0, 00:18:39.797 "enable_zerocopy_send_server": true, 00:18:39.797 "enable_zerocopy_send_client": false, 00:18:39.797 "zerocopy_threshold": 0, 00:18:39.797 "tls_version": 0, 00:18:39.797 "enable_ktls": false 00:18:39.797 } 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "method": "sock_impl_set_options", 00:18:39.797 "params": { 00:18:39.797 "impl_name": "uring", 00:18:39.797 "recv_buf_size": 2097152, 00:18:39.797 "send_buf_size": 2097152, 00:18:39.797 "enable_recv_pipe": true, 00:18:39.797 "enable_quickack": false, 00:18:39.797 "enable_placement_id": 0, 00:18:39.797 "enable_zerocopy_send_server": false, 00:18:39.797 "enable_zerocopy_send_client": false, 00:18:39.797 "zerocopy_threshold": 0, 00:18:39.797 "tls_version": 0, 00:18:39.797 "enable_ktls": false 00:18:39.797 } 00:18:39.797 } 00:18:39.797 ] 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "subsystem": "vmd", 00:18:39.797 "config": [] 00:18:39.797 }, 00:18:39.797 { 00:18:39.797 "subsystem": "accel", 00:18:39.797 "config": [ 00:18:39.797 { 00:18:39.797 "method": "accel_set_options", 00:18:39.797 "params": { 00:18:39.797 "small_cache_size": 128, 00:18:39.797 "large_cache_size": 16, 00:18:39.797 "task_count": 2048, 00:18:39.797 "sequence_count": 2048, 00:18:39.797 "buf_count": 2048 00:18:39.797 } 00:18:39.797 } 00:18:39.797 ] 00:18:39.797 }, 00:18:39.797 { 00:18:39.798 "subsystem": "bdev", 00:18:39.798 "config": [ 00:18:39.798 { 00:18:39.798 "method": "bdev_set_options", 00:18:39.798 "params": { 00:18:39.798 "bdev_io_pool_size": 65535, 00:18:39.798 "bdev_io_cache_size": 256, 00:18:39.798 "bdev_auto_examine": true, 00:18:39.798 "iobuf_small_cache_size": 128, 00:18:39.798 "iobuf_large_cache_size": 16 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_raid_set_options", 00:18:39.798 "params": { 00:18:39.798 "process_window_size_kb": 1024, 00:18:39.798 "process_max_bandwidth_mb_sec": 0 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_iscsi_set_options", 00:18:39.798 "params": { 00:18:39.798 "timeout_sec": 30 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_nvme_set_options", 00:18:39.798 "params": { 00:18:39.798 "action_on_timeout": "none", 00:18:39.798 "timeout_us": 0, 00:18:39.798 "timeout_admin_us": 0, 00:18:39.798 "keep_alive_timeout_ms": 10000, 00:18:39.798 "arbitration_burst": 0, 00:18:39.798 "low_priority_weight": 0, 00:18:39.798 "medium_priority_weight": 0, 00:18:39.798 "high_priority_weight": 0, 00:18:39.798 "nvme_adminq_poll_period_us": 10000, 00:18:39.798 "nvme_ioq_poll_period_us": 0, 00:18:39.798 "io_queue_requests": 512, 00:18:39.798 "delay_cmd_submit": true, 00:18:39.798 "transport_retry_count": 4, 00:18:39.798 "bdev_retry_count": 3, 00:18:39.798 "transport_ack_timeout": 0, 00:18:39.798 "ctrlr_loss_timeout_sec": 0, 00:18:39.798 "reconnect_delay_sec": 0, 00:18:39.798 "fast_io_fail_timeout_sec": 0, 00:18:39.798 "disable_auto_failback": false, 00:18:39.798 "generate_uuids": false, 00:18:39.798 "transport_tos": 0, 00:18:39.798 "nvme_error_stat": false, 00:18:39.798 "rdma_srq_size": 0, 00:18:39.798 "io_path_stat": false, 00:18:39.798 "allow_accel_sequence": false, 00:18:39.798 "rdma_max_cq_size": 0, 00:18:39.798 "rdma_cm_event_timeout_ms": 0, 00:18:39.798 
"dhchap_digests": [ 00:18:39.798 "sha256", 00:18:39.798 "sha384", 00:18:39.798 "sha512" 00:18:39.798 ], 00:18:39.798 "dhchap_dhgroups": [ 00:18:39.798 "null", 00:18:39.798 "ffdhe2048", 00:18:39.798 "ffdhe3072", 00:18:39.798 "ffdhe4096", 00:18:39.798 "ffdhe6144", 00:18:39.798 "ffdhe8192" 00:18:39.798 ] 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_nvme_attach_controller", 00:18:39.798 "params": { 00:18:39.798 "name": "TLSTEST", 00:18:39.798 "trtype": "TCP", 00:18:39.798 "adrfam": "IPv4", 00:18:39.798 "traddr": "10.0.0.3", 00:18:39.798 "trsvcid": "4420", 00:18:39.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.798 "prchk_reftag": false, 00:18:39.798 "prchk_guard": false, 00:18:39.798 "ctrlr_loss_timeout_sec": 0, 00:18:39.798 "reconnect_delay_sec": 0, 00:18:39.798 "fast_io_fail_timeout_sec": 0, 00:18:39.798 "psk": "key0", 00:18:39.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.798 "hdgst": false, 00:18:39.798 "ddgst": false, 00:18:39.798 "multipath": "multipath" 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_nvme_set_hotplug", 00:18:39.798 "params": { 00:18:39.798 "period_us": 100000, 00:18:39.798 "enable": false 00:18:39.798 } 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "method": "bdev_wait_for_examine" 00:18:39.798 } 00:18:39.798 ] 00:18:39.798 }, 00:18:39.798 { 00:18:39.798 "subsystem": "nbd", 00:18:39.798 "config": [] 00:18:39.798 } 00:18:39.798 ] 00:18:39.798 }' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72484 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72484 ']' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72484 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72484 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72484' 00:18:39.798 killing process with pid 72484 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72484 00:18:39.798 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.798 00:18:39.798 Latency(us) 00:18:39.798 [2024-11-20T05:28:54.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.798 [2024-11-20T05:28:54.311Z] =================================================================================================================== 00:18:39.798 [2024-11-20T05:28:54.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72484 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72423 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72423 ']' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 72423 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72423 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:39.798 killing process with pid 72423 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72423' 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72423 00:18:39.798 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72423 00:18:40.058 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:40.058 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.058 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:40.058 "subsystems": [ 00:18:40.058 { 00:18:40.058 "subsystem": "keyring", 00:18:40.058 "config": [ 00:18:40.058 { 00:18:40.058 "method": "keyring_file_add_key", 00:18:40.058 "params": { 00:18:40.058 "name": "key0", 00:18:40.058 "path": "/tmp/tmp.vy9cFWYoW2" 00:18:40.058 } 00:18:40.058 } 00:18:40.058 ] 00:18:40.058 }, 00:18:40.058 { 00:18:40.058 "subsystem": "iobuf", 00:18:40.058 "config": [ 00:18:40.058 { 00:18:40.058 "method": "iobuf_set_options", 00:18:40.058 "params": { 00:18:40.058 "small_pool_count": 8192, 00:18:40.058 "large_pool_count": 1024, 00:18:40.058 "small_bufsize": 8192, 00:18:40.058 "large_bufsize": 135168, 00:18:40.058 "enable_numa": false 00:18:40.058 } 00:18:40.058 } 00:18:40.058 ] 00:18:40.058 }, 00:18:40.058 { 00:18:40.058 "subsystem": "sock", 00:18:40.058 "config": [ 00:18:40.058 { 00:18:40.058 "method": "sock_set_default_impl", 00:18:40.058 "params": { 00:18:40.058 "impl_name": "uring" 00:18:40.058 } 00:18:40.058 }, 00:18:40.058 { 00:18:40.058 "method": "sock_impl_set_options", 00:18:40.058 "params": { 00:18:40.058 "impl_name": "ssl", 00:18:40.058 "recv_buf_size": 4096, 00:18:40.058 "send_buf_size": 4096, 00:18:40.058 "enable_recv_pipe": true, 00:18:40.058 "enable_quickack": false, 00:18:40.058 "enable_placement_id": 0, 00:18:40.058 "enable_zerocopy_send_server": true, 00:18:40.058 "enable_zerocopy_send_client": false, 00:18:40.058 "zerocopy_threshold": 0, 00:18:40.058 "tls_version": 0, 00:18:40.058 "enable_ktls": false 00:18:40.058 } 00:18:40.058 }, 00:18:40.058 { 00:18:40.058 "method": "sock_impl_set_options", 00:18:40.058 "params": { 00:18:40.058 "impl_name": "posix", 00:18:40.058 "recv_buf_size": 2097152, 00:18:40.058 "send_buf_size": 2097152, 00:18:40.058 "enable_recv_pipe": true, 00:18:40.059 "enable_quickack": false, 00:18:40.059 "enable_placement_id": 0, 00:18:40.059 "enable_zerocopy_send_server": true, 00:18:40.059 "enable_zerocopy_send_client": false, 00:18:40.059 "zerocopy_threshold": 0, 00:18:40.059 "tls_version": 0, 00:18:40.059 "enable_ktls": false 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "sock_impl_set_options", 00:18:40.059 "params": { 00:18:40.059 "impl_name": "uring", 00:18:40.059 "recv_buf_size": 2097152, 00:18:40.059 
"send_buf_size": 2097152, 00:18:40.059 "enable_recv_pipe": true, 00:18:40.059 "enable_quickack": false, 00:18:40.059 "enable_placement_id": 0, 00:18:40.059 "enable_zerocopy_send_server": false, 00:18:40.059 "enable_zerocopy_send_client": false, 00:18:40.059 "zerocopy_threshold": 0, 00:18:40.059 "tls_version": 0, 00:18:40.059 "enable_ktls": false 00:18:40.059 } 00:18:40.059 } 00:18:40.059 ] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "vmd", 00:18:40.059 "config": [] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "accel", 00:18:40.059 "config": [ 00:18:40.059 { 00:18:40.059 "method": "accel_set_options", 00:18:40.059 "params": { 00:18:40.059 "small_cache_size": 128, 00:18:40.059 "large_cache_size": 16, 00:18:40.059 "task_count": 2048, 00:18:40.059 "sequence_count": 2048, 00:18:40.059 "buf_count": 2048 00:18:40.059 } 00:18:40.059 } 00:18:40.059 ] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "bdev", 00:18:40.059 "config": [ 00:18:40.059 { 00:18:40.059 "method": "bdev_set_options", 00:18:40.059 "params": { 00:18:40.059 "bdev_io_pool_size": 65535, 00:18:40.059 "bdev_io_cache_size": 256, 00:18:40.059 "bdev_auto_examine": true, 00:18:40.059 "iobuf_small_cache_size": 128, 00:18:40.059 "iobuf_large_cache_size": 16 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_raid_set_options", 00:18:40.059 "params": { 00:18:40.059 "process_window_size_kb": 1024, 00:18:40.059 "process_max_bandwidth_mb_sec": 0 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_iscsi_set_options", 00:18:40.059 "params": { 00:18:40.059 "timeout_sec": 30 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_nvme_set_options", 00:18:40.059 "params": { 00:18:40.059 "action_on_timeout": "none", 00:18:40.059 "timeout_us": 0, 00:18:40.059 "timeout_admin_us": 0, 00:18:40.059 "keep_alive_timeout_ms": 10000, 00:18:40.059 "arbitration_burst": 0, 00:18:40.059 "low_priority_weight": 0, 00:18:40.059 "medium_priority_weight": 0, 00:18:40.059 "high_priority_weight": 0, 00:18:40.059 "nvme_adminq_poll_period_us": 10000, 00:18:40.059 "nvme_ioq_poll_period_us": 0, 00:18:40.059 "io_queue_requests": 0, 00:18:40.059 "delay_cmd_submit": true, 00:18:40.059 "transport_retry_count": 4, 00:18:40.059 "bdev_retry_count": 3, 00:18:40.059 "transport_ack_timeout": 0, 00:18:40.059 "ctrlr_loss_timeout_sec": 0, 00:18:40.059 "reconnect_delay_sec": 0, 00:18:40.059 "fast_io_fail_timeout_sec": 0, 00:18:40.059 "disable_auto_failback": false, 00:18:40.059 "generate_uuids": false, 00:18:40.059 "transport_tos": 0, 00:18:40.059 "nvme_error_stat": false, 00:18:40.059 "rdma_srq_size": 0, 00:18:40.059 "io_path_stat": false, 00:18:40.059 "allow_accel_sequence": false, 00:18:40.059 "rdma_max_cq_size": 0, 00:18:40.059 "rdma_cm_event_timeout_ms": 0, 00:18:40.059 "dhchap_digests": [ 00:18:40.059 "sha256", 00:18:40.059 "sha384", 00:18:40.059 "sha512" 00:18:40.059 ], 00:18:40.059 "dhchap_dhgroups": [ 00:18:40.059 "null", 00:18:40.059 "ffdhe2048", 00:18:40.059 "ffdhe3072", 00:18:40.059 "ffdhe4096", 00:18:40.059 "ffdhe6144", 00:18:40.059 "ffdhe8192" 00:18:40.059 ] 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_nvme_set_hotplug", 00:18:40.059 "params": { 00:18:40.059 "period_us": 100000, 00:18:40.059 "enable": false 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_malloc_create", 00:18:40.059 "params": { 00:18:40.059 "name": "malloc0", 00:18:40.059 "num_blocks": 8192, 00:18:40.059 "block_size": 4096, 00:18:40.059 
"physical_block_size": 4096, 00:18:40.059 "uuid": "33813d7b-9121-4369-9d6a-36fc099d76e0", 00:18:40.059 "optimal_io_boundary": 0, 00:18:40.059 "md_size": 0, 00:18:40.059 "dif_type": 0, 00:18:40.059 "dif_is_head_of_md": false, 00:18:40.059 "dif_pi_format": 0 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "bdev_wait_for_examine" 00:18:40.059 } 00:18:40.059 ] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "nbd", 00:18:40.059 "config": [] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "scheduler", 00:18:40.059 "config": [ 00:18:40.059 { 00:18:40.059 "method": "framework_set_scheduler", 00:18:40.059 "params": { 00:18:40.059 "name": "static" 00:18:40.059 } 00:18:40.059 } 00:18:40.059 ] 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "subsystem": "nvmf", 00:18:40.059 "config": [ 00:18:40.059 { 00:18:40.059 "method": "nvmf_set_config", 00:18:40.059 "params": { 00:18:40.059 "discovery_filter": "match_any", 00:18:40.059 "admin_cmd_passthru": { 00:18:40.059 "identify_ctrlr": false 00:18:40.059 }, 00:18:40.059 "dhchap_digests": [ 00:18:40.059 "sha256", 00:18:40.059 "sha384", 00:18:40.059 "sha512" 00:18:40.059 ], 00:18:40.059 "dhchap_dhgroups": [ 00:18:40.059 "null", 00:18:40.059 "ffdhe2048", 00:18:40.059 "ffdhe3072", 00:18:40.059 "ffdhe4096", 00:18:40.059 "ffdhe6144", 00:18:40.059 "ffdhe8192" 00:18:40.059 ] 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_set_max_subsystems", 00:18:40.059 "params": { 00:18:40.059 "max_subsystems": 1024 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_set_crdt", 00:18:40.059 "params": { 00:18:40.059 "crdt1": 0, 00:18:40.059 "crdt2": 0, 00:18:40.059 "crdt3": 0 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_create_transport", 00:18:40.059 "params": { 00:18:40.059 "trtype": "TCP", 00:18:40.059 "max_queue_depth": 128, 00:18:40.059 "max_io_qpairs_per_ctrlr": 127, 00:18:40.059 "in_capsule_data_size": 4096, 00:18:40.059 "max_io_size": 131072, 00:18:40.059 "io_unit_size": 131072, 00:18:40.059 "max_aq_depth": 128, 00:18:40.059 "num_shared_buffers": 511, 00:18:40.059 "buf_cache_size": 4294967295, 00:18:40.059 "dif_insert_or_strip": false, 00:18:40.059 "zcopy": false, 00:18:40.059 "c2h_success": false, 00:18:40.059 "sock_priority": 0, 00:18:40.059 "abort_timeout_sec": 1, 00:18:40.059 "ack_timeout": 0, 00:18:40.059 "data_wr_pool_size": 0 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_create_subsystem", 00:18:40.059 "params": { 00:18:40.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.059 "allow_any_host": false, 00:18:40.059 "serial_number": "SPDK00000000000001", 00:18:40.059 "model_number": "SPDK bdev Controller", 00:18:40.059 "max_namespaces": 10, 00:18:40.059 "min_cntlid": 1, 00:18:40.059 "max_cntlid": 65519, 00:18:40.059 "ana_reporting": false 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_subsystem_add_host", 00:18:40.059 "params": { 00:18:40.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.059 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.059 "psk": "key0" 00:18:40.059 } 00:18:40.059 }, 00:18:40.059 { 00:18:40.059 "method": "nvmf_subsystem_add_ns", 00:18:40.060 "params": { 00:18:40.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.060 "namespace": { 00:18:40.060 "nsid": 1, 00:18:40.060 "bdev_name": "malloc0", 00:18:40.060 "nguid": "33813D7B912143699D6A36FC099D76E0", 00:18:40.060 "uuid": "33813d7b-9121-4369-9d6a-36fc099d76e0", 00:18:40.060 "no_auto_visible": false 00:18:40.060 } 00:18:40.060 } 
00:18:40.060 }, 00:18:40.060 { 00:18:40.060 "method": "nvmf_subsystem_add_listener", 00:18:40.060 "params": { 00:18:40.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.060 "listen_address": { 00:18:40.060 "trtype": "TCP", 00:18:40.060 "adrfam": "IPv4", 00:18:40.060 "traddr": "10.0.0.3", 00:18:40.060 "trsvcid": "4420" 00:18:40.060 }, 00:18:40.060 "secure_channel": true 00:18:40.060 } 00:18:40.060 } 00:18:40.060 ] 00:18:40.060 } 00:18:40.060 ] 00:18:40.060 }' 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72526 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72526 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72526 ']' 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.060 05:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.060 [2024-11-20 05:28:54.463870] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:40.060 [2024-11-20 05:28:54.463974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.319 [2024-11-20 05:28:54.610440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.319 [2024-11-20 05:28:54.643925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.319 [2024-11-20 05:28:54.643985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.320 [2024-11-20 05:28:54.643996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.320 [2024-11-20 05:28:54.644004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.320 [2024-11-20 05:28:54.644012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:40.320 [2024-11-20 05:28:54.644378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.320 [2024-11-20 05:28:54.789867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.579 [2024-11-20 05:28:54.850474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.579 [2024-11-20 05:28:54.882432] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.579 [2024-11-20 05:28:54.882690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:41.145 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.145 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:41.145 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.145 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.145 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72564 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72564 /var/tmp/bdevperf.sock 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72564 ']' 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
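For readability, the JSON blob echoed into nvmf_tgt through -c /dev/fd/62 above boils down to the same target-side TLS setup that the test later performs one RPC at a time (target/tls.sh@221). A minimal sketch of that sequence, assuming the target answers on its default /var/tmp/spdk.sock RPC socket and using only commands that appear verbatim further down in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-only; it shows up as "secure_channel": true in the dump above
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    # 32 MiB at 4096-byte blocks -> the "num_blocks": 8192 recorded for malloc0
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK file and allow host1 to connect with it
    $rpc keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0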
00:18:41.404 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:41.404 "subsystems": [ 00:18:41.404 { 00:18:41.404 "subsystem": "keyring", 00:18:41.404 "config": [ 00:18:41.404 { 00:18:41.404 "method": "keyring_file_add_key", 00:18:41.404 "params": { 00:18:41.404 "name": "key0", 00:18:41.404 "path": "/tmp/tmp.vy9cFWYoW2" 00:18:41.404 } 00:18:41.404 } 00:18:41.404 ] 00:18:41.404 }, 00:18:41.404 { 00:18:41.404 "subsystem": "iobuf", 00:18:41.404 "config": [ 00:18:41.404 { 00:18:41.405 "method": "iobuf_set_options", 00:18:41.405 "params": { 00:18:41.405 "small_pool_count": 8192, 00:18:41.405 "large_pool_count": 1024, 00:18:41.405 "small_bufsize": 8192, 00:18:41.405 "large_bufsize": 135168, 00:18:41.405 "enable_numa": false 00:18:41.405 } 00:18:41.405 } 00:18:41.405 ] 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "subsystem": "sock", 00:18:41.405 "config": [ 00:18:41.405 { 00:18:41.405 "method": "sock_set_default_impl", 00:18:41.405 "params": { 00:18:41.405 "impl_name": "uring" 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "sock_impl_set_options", 00:18:41.405 "params": { 00:18:41.405 "impl_name": "ssl", 00:18:41.405 "recv_buf_size": 4096, 00:18:41.405 "send_buf_size": 4096, 00:18:41.405 "enable_recv_pipe": true, 00:18:41.405 "enable_quickack": false, 00:18:41.405 "enable_placement_id": 0, 00:18:41.405 "enable_zerocopy_send_server": true, 00:18:41.405 "enable_zerocopy_send_client": false, 00:18:41.405 "zerocopy_threshold": 0, 00:18:41.405 "tls_version": 0, 00:18:41.405 "enable_ktls": false 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "sock_impl_set_options", 00:18:41.405 "params": { 00:18:41.405 "impl_name": "posix", 00:18:41.405 "recv_buf_size": 2097152, 00:18:41.405 "send_buf_size": 2097152, 00:18:41.405 "enable_recv_pipe": true, 00:18:41.405 "enable_quickack": false, 00:18:41.405 "enable_placement_id": 0, 00:18:41.405 "enable_zerocopy_send_server": true, 00:18:41.405 "enable_zerocopy_send_client": false, 00:18:41.405 "zerocopy_threshold": 0, 00:18:41.405 "tls_version": 0, 00:18:41.405 "enable_ktls": false 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "sock_impl_set_options", 00:18:41.405 "params": { 00:18:41.405 "impl_name": "uring", 00:18:41.405 "recv_buf_size": 2097152, 00:18:41.405 "send_buf_size": 2097152, 00:18:41.405 "enable_recv_pipe": true, 00:18:41.405 "enable_quickack": false, 00:18:41.405 "enable_placement_id": 0, 00:18:41.405 "enable_zerocopy_send_server": false, 00:18:41.405 "enable_zerocopy_send_client": false, 00:18:41.405 "zerocopy_threshold": 0, 00:18:41.405 "tls_version": 0, 00:18:41.405 "enable_ktls": false 00:18:41.405 } 00:18:41.405 } 00:18:41.405 ] 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "subsystem": "vmd", 00:18:41.405 "config": [] 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "subsystem": "accel", 00:18:41.405 "config": [ 00:18:41.405 { 00:18:41.405 "method": "accel_set_options", 00:18:41.405 "params": { 00:18:41.405 "small_cache_size": 128, 00:18:41.405 "large_cache_size": 16, 00:18:41.405 "task_count": 2048, 00:18:41.405 "sequence_count": 2048, 00:18:41.405 "buf_count": 2048 00:18:41.405 } 00:18:41.405 } 00:18:41.405 ] 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "subsystem": "bdev", 00:18:41.405 "config": [ 00:18:41.405 { 00:18:41.405 "method": "bdev_set_options", 00:18:41.405 "params": { 00:18:41.405 "bdev_io_pool_size": 65535, 00:18:41.405 "bdev_io_cache_size": 256, 00:18:41.405 "bdev_auto_examine": true, 00:18:41.405 "iobuf_small_cache_size": 128, 00:18:41.405 
"iobuf_large_cache_size": 16 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_raid_set_options", 00:18:41.405 "params": { 00:18:41.405 "process_window_size_kb": 1024, 00:18:41.405 "process_max_bandwidth_mb_sec": 0 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_iscsi_set_options", 00:18:41.405 "params": { 00:18:41.405 "timeout_sec": 30 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_nvme_set_options", 00:18:41.405 "params": { 00:18:41.405 "action_on_timeout": "none", 00:18:41.405 "timeout_us": 0, 00:18:41.405 "timeout_admin_us": 0, 00:18:41.405 "keep_alive_timeout_ms": 10000, 00:18:41.405 "arbitration_burst": 0, 00:18:41.405 "low_priority_weight": 0, 00:18:41.405 "medium_priority_weight": 0, 00:18:41.405 "high_priority_weight": 0, 00:18:41.405 "nvme_adminq_poll_period_us": 10000, 00:18:41.405 "nvme_ioq_poll_period_us": 0, 00:18:41.405 "io_queue_requests": 512, 00:18:41.405 "delay_cmd_submit": true, 00:18:41.405 "transport_retry_count": 4, 00:18:41.405 "bdev_retry_count": 3, 00:18:41.405 "transport_ack_timeout": 0, 00:18:41.405 "ctrlr_loss_timeout_sec": 0, 00:18:41.405 "reconnect_delay_sec": 0, 00:18:41.405 "fast_io_fail_timeout_sec": 0, 00:18:41.405 "disable_auto_failback": false, 00:18:41.405 "generate_uuids": false, 00:18:41.405 "transport_tos": 0, 00:18:41.405 "nvme_error_stat": false, 00:18:41.405 "rdma_srq_size": 0, 00:18:41.405 "io_path_stat": false, 00:18:41.405 "allow_accel_sequence": false, 00:18:41.405 "rdma_max_cq_size": 0, 00:18:41.405 "rdma_cm_event_timeout_ms": 0, 00:18:41.405 "dhchap_digests": [ 00:18:41.405 "sha256", 00:18:41.405 "sha384", 00:18:41.405 "sha512" 00:18:41.405 ], 00:18:41.405 "dhchap_dhgroups": [ 00:18:41.405 "null", 00:18:41.405 "ffdhe2048", 00:18:41.405 "ffdhe3072", 00:18:41.405 "ffdhe4096", 00:18:41.405 "ffdhe6144", 00:18:41.405 "ffdhe8192" 00:18:41.405 ] 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_nvme_attach_controller", 00:18:41.405 "params": { 00:18:41.405 "name": "TLSTEST", 00:18:41.405 "trtype": "TCP", 00:18:41.405 "adrfam": "IPv4", 00:18:41.405 "traddr": "10.0.0.3", 00:18:41.405 "trsvcid": "4420", 00:18:41.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.405 "prchk_reftag": false, 00:18:41.405 "prchk_guard": false, 00:18:41.405 "ctrlr_loss_timeout_sec": 0, 00:18:41.405 "reconnect_delay_sec": 0, 00:18:41.405 "fast_io_fail_timeout_sec": 0, 00:18:41.405 "psk": "key0", 00:18:41.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.405 "hdgst": false, 00:18:41.405 "ddgst": false, 00:18:41.405 "multipath": "multipath" 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_nvme_set_hotplug", 00:18:41.405 "params": { 00:18:41.405 "period_us": 100000, 00:18:41.405 "enable": false 00:18:41.405 } 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "method": "bdev_wait_for_examine" 00:18:41.405 } 00:18:41.405 ] 00:18:41.405 }, 00:18:41.405 { 00:18:41.405 "subsystem": "nbd", 00:18:41.405 "config": [] 00:18:41.405 } 00:18:41.405 ] 00:18:41.405 }' 00:18:41.405 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.405 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.405 [2024-11-20 05:28:55.759585] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:18:41.406 [2024-11-20 05:28:55.759690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72564 ] 00:18:41.406 [2024-11-20 05:28:55.911743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.664 [2024-11-20 05:28:55.951834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.664 [2024-11-20 05:28:56.067895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:41.664 [2024-11-20 05:28:56.103978] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.633 05:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.633 05:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:42.633 05:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:42.633 Running I/O for 10 seconds... 00:18:44.504 3489.00 IOPS, 13.63 MiB/s [2024-11-20T05:29:00.401Z] 3546.00 IOPS, 13.85 MiB/s [2024-11-20T05:29:01.337Z] 3572.33 IOPS, 13.95 MiB/s [2024-11-20T05:29:02.275Z] 3501.50 IOPS, 13.68 MiB/s [2024-11-20T05:29:03.210Z] 3541.60 IOPS, 13.83 MiB/s [2024-11-20T05:29:04.166Z] 3579.33 IOPS, 13.98 MiB/s [2024-11-20T05:29:05.097Z] 3611.43 IOPS, 14.11 MiB/s [2024-11-20T05:29:06.031Z] 3619.38 IOPS, 14.14 MiB/s [2024-11-20T05:29:07.048Z] 3648.56 IOPS, 14.25 MiB/s [2024-11-20T05:29:07.048Z] 3661.70 IOPS, 14.30 MiB/s 00:18:52.535 Latency(us) 00:18:52.535 [2024-11-20T05:29:07.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.535 Verification LBA range: start 0x0 length 0x2000 00:18:52.535 TLSTESTn1 : 10.03 3665.05 14.32 0.00 0.00 34848.44 7328.12 33363.78 00:18:52.535 [2024-11-20T05:29:07.048Z] =================================================================================================================== 00:18:52.535 [2024-11-20T05:29:07.048Z] Total : 3665.05 14.32 0.00 0.00 34848.44 7328.12 33363.78 00:18:52.535 { 00:18:52.535 "results": [ 00:18:52.535 { 00:18:52.535 "job": "TLSTESTn1", 00:18:52.535 "core_mask": "0x4", 00:18:52.535 "workload": "verify", 00:18:52.535 "status": "finished", 00:18:52.535 "verify_range": { 00:18:52.535 "start": 0, 00:18:52.535 "length": 8192 00:18:52.535 }, 00:18:52.535 "queue_depth": 128, 00:18:52.535 "io_size": 4096, 00:18:52.535 "runtime": 10.025499, 00:18:52.535 "iops": 3665.054477587599, 00:18:52.535 "mibps": 14.31661905307656, 00:18:52.535 "io_failed": 0, 00:18:52.535 "io_timeout": 0, 00:18:52.535 "avg_latency_us": 34848.44136096431, 00:18:52.535 "min_latency_us": 7328.1163636363635, 00:18:52.535 "max_latency_us": 33363.781818181815 00:18:52.535 } 00:18:52.535 ], 00:18:52.535 "core_count": 1 00:18:52.535 } 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72564 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72564 ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 72564 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72564 00:18:52.793 killing process with pid 72564 00:18:52.793 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.793 00:18:52.793 Latency(us) 00:18:52.793 [2024-11-20T05:29:07.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.793 [2024-11-20T05:29:07.306Z] =================================================================================================================== 00:18:52.793 [2024-11-20T05:29:07.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72564' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72564 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72564 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72526 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72526 ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72526 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72526 00:18:52.793 killing process with pid 72526 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72526' 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72526 00:18:52.793 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72526 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72697 00:18:53.052 05:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72697 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72697 ']' 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.052 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.052 [2024-11-20 05:29:07.469513] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:53.052 [2024-11-20 05:29:07.469630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.311 [2024-11-20 05:29:07.610080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.311 [2024-11-20 05:29:07.643808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.311 [2024-11-20 05:29:07.643864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.311 [2024-11-20 05:29:07.643877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.311 [2024-11-20 05:29:07.643886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.311 [2024-11-20 05:29:07.643893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
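Since the target is started with -e 0xFFFF, every tracepoint group is enabled and the trace buffer named in the notices above can be inspected while the test is still running. A small sketch following the two options the notice itself suggests; the output paths and the build/bin location of spdk_trace are assumptions about this particular checkout:

    # decode a snapshot of the live trace (app name "nvmf", shm instance 0)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw shared-memory file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/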
00:18:53.311 [2024-11-20 05:29:07.644240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.311 [2024-11-20 05:29:07.673223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vy9cFWYoW2 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vy9cFWYoW2 00:18:53.311 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.877 [2024-11-20 05:29:08.081001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.877 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.135 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:54.394 [2024-11-20 05:29:08.761167] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.394 [2024-11-20 05:29:08.761380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:54.394 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:54.653 malloc0 00:18:54.653 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.217 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:55.217 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72756 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72756 /var/tmp/bdevperf.sock 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72756 ']' 00:18:55.783 
05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.783 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.784 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.784 05:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.784 [2024-11-20 05:29:10.049600] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:55.784 [2024-11-20 05:29:10.049706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72756 ] 00:18:55.784 [2024-11-20 05:29:10.198250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.784 [2024-11-20 05:29:10.246361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.784 [2024-11-20 05:29:10.279110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:56.719 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.719 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:56.719 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:18:56.977 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:57.235 [2024-11-20 05:29:11.709190] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.494 nvme0n1 00:18:57.494 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.494 Running I/O for 1 seconds... 
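The verify run started above ends with a results JSON just like the 10-second run earlier, and in both cases the iops and mibps fields are tied together by the 4 KiB I/O size (-o 4k / -o 4096). A quick sanity check of the reported numbers, as a plain awk sketch:

    # 10-second run earlier: 3665.05 IOPS x 4096 B ~= 14.32 MiB/s, matching its report
    awk 'BEGIN { printf "%.2f MiB/s\n", 3665.05 * 4096 / (1024 * 1024) }'
    # 1-second run below:    4030.71 IOPS x 4096 B ~= 15.74 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 4030.71 * 4096 / (1024 * 1024) }'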
00:18:58.451 3968.00 IOPS, 15.50 MiB/s 00:18:58.451 Latency(us) 00:18:58.451 [2024-11-20T05:29:12.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.451 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.451 Verification LBA range: start 0x0 length 0x2000 00:18:58.451 nvme0n1 : 1.02 4030.71 15.74 0.00 0.00 31472.28 5928.03 25499.46 00:18:58.451 [2024-11-20T05:29:12.964Z] =================================================================================================================== 00:18:58.451 [2024-11-20T05:29:12.964Z] Total : 4030.71 15.74 0.00 0.00 31472.28 5928.03 25499.46 00:18:58.451 { 00:18:58.451 "results": [ 00:18:58.451 { 00:18:58.451 "job": "nvme0n1", 00:18:58.451 "core_mask": "0x2", 00:18:58.451 "workload": "verify", 00:18:58.451 "status": "finished", 00:18:58.451 "verify_range": { 00:18:58.451 "start": 0, 00:18:58.451 "length": 8192 00:18:58.451 }, 00:18:58.451 "queue_depth": 128, 00:18:58.451 "io_size": 4096, 00:18:58.451 "runtime": 1.016199, 00:18:58.451 "iops": 4030.706584045054, 00:18:58.451 "mibps": 15.744947593925993, 00:18:58.451 "io_failed": 0, 00:18:58.451 "io_timeout": 0, 00:18:58.451 "avg_latency_us": 31472.276363636363, 00:18:58.451 "min_latency_us": 5928.029090909091, 00:18:58.451 "max_latency_us": 25499.46181818182 00:18:58.451 } 00:18:58.452 ], 00:18:58.452 "core_count": 1 00:18:58.452 } 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72756 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72756 ']' 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72756 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.711 05:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72756 00:18:58.711 killing process with pid 72756 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72756' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72756 00:18:58.711 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.711 00:18:58.711 Latency(us) 00:18:58.711 [2024-11-20T05:29:13.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.711 [2024-11-20T05:29:13.224Z] =================================================================================================================== 00:18:58.711 [2024-11-20T05:29:13.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72756 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72697 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72697 ']' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72697 00:18:58.711 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72697 00:18:58.711 killing process with pid 72697 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72697' 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72697 00:18:58.711 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72697 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72807 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72807 00:18:58.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72807 ']' 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:58.969 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 [2024-11-20 05:29:13.383839] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:58.969 [2024-11-20 05:29:13.384132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.228 [2024-11-20 05:29:13.536809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.228 [2024-11-20 05:29:13.574917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.228 [2024-11-20 05:29:13.575201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.228 [2024-11-20 05:29:13.575226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.228 [2024-11-20 05:29:13.575239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.228 [2024-11-20 05:29:13.575248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.228 [2024-11-20 05:29:13.575622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.228 [2024-11-20 05:29:13.609430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.228 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.487 [2024-11-20 05:29:13.740882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.487 malloc0 00:18:59.487 [2024-11-20 05:29:13.769613] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.487 [2024-11-20 05:29:13.770046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72827 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72827 /var/tmp/bdevperf.sock 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72827 ']' 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
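A little further down, the test snapshots the fully configured target with rpc_cmd save_config (target/tls.sh@267) rather than rebuilding the JSON by hand. Because nvmf_tgt accepts the same JSON back through -c (exactly how the /dev/fd/62 instances above were started), such a dump can in principle be replayed into a fresh target; a rough sketch with placeholder file names, assuming the referenced key file /tmp/tmp.vy9cFWYoW2 still exists on the replaying host:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # dump the live target configuration (keyring entry, TLS listener, namespace, ...)
    $rpc save_config > /tmp/tgt_config.json
    # a later target instance can be brought up straight from that dump
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json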
00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.487 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.487 [2024-11-20 05:29:13.858420] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:18:59.487 [2024-11-20 05:29:13.858724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:18:59.745 [2024-11-20 05:29:14.012562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.745 [2024-11-20 05:29:14.053335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.745 [2024-11-20 05:29:14.087093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.745 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:59.745 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:59.745 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vy9cFWYoW2 00:19:00.313 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:00.313 [2024-11-20 05:29:14.762107] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.571 nvme0n1 00:19:00.571 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.571 Running I/O for 1 seconds... 
00:19:01.505 3696.00 IOPS, 14.44 MiB/s 00:19:01.505 Latency(us) 00:19:01.505 [2024-11-20T05:29:16.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.505 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.505 Verification LBA range: start 0x0 length 0x2000 00:19:01.505 nvme0n1 : 1.02 3740.75 14.61 0.00 0.00 33845.05 8281.37 26095.24 00:19:01.505 [2024-11-20T05:29:16.018Z] =================================================================================================================== 00:19:01.505 [2024-11-20T05:29:16.018Z] Total : 3740.75 14.61 0.00 0.00 33845.05 8281.37 26095.24 00:19:01.505 { 00:19:01.505 "results": [ 00:19:01.505 { 00:19:01.505 "job": "nvme0n1", 00:19:01.505 "core_mask": "0x2", 00:19:01.505 "workload": "verify", 00:19:01.505 "status": "finished", 00:19:01.505 "verify_range": { 00:19:01.505 "start": 0, 00:19:01.505 "length": 8192 00:19:01.505 }, 00:19:01.505 "queue_depth": 128, 00:19:01.505 "io_size": 4096, 00:19:01.505 "runtime": 1.022256, 00:19:01.505 "iops": 3740.7459579596502, 00:19:01.505 "mibps": 14.612288898279884, 00:19:01.505 "io_failed": 0, 00:19:01.505 "io_timeout": 0, 00:19:01.505 "avg_latency_us": 33845.05013313047, 00:19:01.505 "min_latency_us": 8281.367272727273, 00:19:01.505 "max_latency_us": 26095.243636363637 00:19:01.505 } 00:19:01.505 ], 00:19:01.505 "core_count": 1 00:19:01.505 } 00:19:01.764 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:01.764 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.764 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.764 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.764 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:01.764 "subsystems": [ 00:19:01.764 { 00:19:01.764 "subsystem": "keyring", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "keyring_file_add_key", 00:19:01.764 "params": { 00:19:01.764 "name": "key0", 00:19:01.764 "path": "/tmp/tmp.vy9cFWYoW2" 00:19:01.764 } 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "iobuf", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "iobuf_set_options", 00:19:01.764 "params": { 00:19:01.764 "small_pool_count": 8192, 00:19:01.764 "large_pool_count": 1024, 00:19:01.764 "small_bufsize": 8192, 00:19:01.764 "large_bufsize": 135168, 00:19:01.764 "enable_numa": false 00:19:01.764 } 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "sock", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "sock_set_default_impl", 00:19:01.764 "params": { 00:19:01.764 "impl_name": "uring" 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "sock_impl_set_options", 00:19:01.764 "params": { 00:19:01.764 "impl_name": "ssl", 00:19:01.764 "recv_buf_size": 4096, 00:19:01.764 "send_buf_size": 4096, 00:19:01.764 "enable_recv_pipe": true, 00:19:01.764 "enable_quickack": false, 00:19:01.764 "enable_placement_id": 0, 00:19:01.764 "enable_zerocopy_send_server": true, 00:19:01.764 "enable_zerocopy_send_client": false, 00:19:01.764 "zerocopy_threshold": 0, 00:19:01.764 "tls_version": 0, 00:19:01.764 "enable_ktls": false 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "sock_impl_set_options", 00:19:01.764 "params": { 00:19:01.764 "impl_name": 
"posix", 00:19:01.764 "recv_buf_size": 2097152, 00:19:01.764 "send_buf_size": 2097152, 00:19:01.764 "enable_recv_pipe": true, 00:19:01.764 "enable_quickack": false, 00:19:01.764 "enable_placement_id": 0, 00:19:01.764 "enable_zerocopy_send_server": true, 00:19:01.764 "enable_zerocopy_send_client": false, 00:19:01.764 "zerocopy_threshold": 0, 00:19:01.764 "tls_version": 0, 00:19:01.764 "enable_ktls": false 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "sock_impl_set_options", 00:19:01.764 "params": { 00:19:01.764 "impl_name": "uring", 00:19:01.764 "recv_buf_size": 2097152, 00:19:01.764 "send_buf_size": 2097152, 00:19:01.764 "enable_recv_pipe": true, 00:19:01.764 "enable_quickack": false, 00:19:01.764 "enable_placement_id": 0, 00:19:01.764 "enable_zerocopy_send_server": false, 00:19:01.764 "enable_zerocopy_send_client": false, 00:19:01.764 "zerocopy_threshold": 0, 00:19:01.764 "tls_version": 0, 00:19:01.764 "enable_ktls": false 00:19:01.764 } 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "vmd", 00:19:01.764 "config": [] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "accel", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "accel_set_options", 00:19:01.764 "params": { 00:19:01.764 "small_cache_size": 128, 00:19:01.764 "large_cache_size": 16, 00:19:01.764 "task_count": 2048, 00:19:01.764 "sequence_count": 2048, 00:19:01.764 "buf_count": 2048 00:19:01.764 } 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "bdev", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "bdev_set_options", 00:19:01.764 "params": { 00:19:01.764 "bdev_io_pool_size": 65535, 00:19:01.764 "bdev_io_cache_size": 256, 00:19:01.764 "bdev_auto_examine": true, 00:19:01.764 "iobuf_small_cache_size": 128, 00:19:01.764 "iobuf_large_cache_size": 16 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_raid_set_options", 00:19:01.764 "params": { 00:19:01.764 "process_window_size_kb": 1024, 00:19:01.764 "process_max_bandwidth_mb_sec": 0 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_iscsi_set_options", 00:19:01.764 "params": { 00:19:01.764 "timeout_sec": 30 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_nvme_set_options", 00:19:01.764 "params": { 00:19:01.764 "action_on_timeout": "none", 00:19:01.764 "timeout_us": 0, 00:19:01.764 "timeout_admin_us": 0, 00:19:01.764 "keep_alive_timeout_ms": 10000, 00:19:01.764 "arbitration_burst": 0, 00:19:01.764 "low_priority_weight": 0, 00:19:01.764 "medium_priority_weight": 0, 00:19:01.764 "high_priority_weight": 0, 00:19:01.764 "nvme_adminq_poll_period_us": 10000, 00:19:01.764 "nvme_ioq_poll_period_us": 0, 00:19:01.764 "io_queue_requests": 0, 00:19:01.764 "delay_cmd_submit": true, 00:19:01.764 "transport_retry_count": 4, 00:19:01.764 "bdev_retry_count": 3, 00:19:01.764 "transport_ack_timeout": 0, 00:19:01.764 "ctrlr_loss_timeout_sec": 0, 00:19:01.764 "reconnect_delay_sec": 0, 00:19:01.764 "fast_io_fail_timeout_sec": 0, 00:19:01.764 "disable_auto_failback": false, 00:19:01.764 "generate_uuids": false, 00:19:01.764 "transport_tos": 0, 00:19:01.764 "nvme_error_stat": false, 00:19:01.764 "rdma_srq_size": 0, 00:19:01.764 "io_path_stat": false, 00:19:01.764 "allow_accel_sequence": false, 00:19:01.764 "rdma_max_cq_size": 0, 00:19:01.764 "rdma_cm_event_timeout_ms": 0, 00:19:01.764 "dhchap_digests": [ 00:19:01.764 "sha256", 00:19:01.764 "sha384", 00:19:01.764 "sha512" 00:19:01.764 ], 00:19:01.764 
"dhchap_dhgroups": [ 00:19:01.764 "null", 00:19:01.764 "ffdhe2048", 00:19:01.764 "ffdhe3072", 00:19:01.764 "ffdhe4096", 00:19:01.764 "ffdhe6144", 00:19:01.764 "ffdhe8192" 00:19:01.764 ] 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_nvme_set_hotplug", 00:19:01.764 "params": { 00:19:01.764 "period_us": 100000, 00:19:01.764 "enable": false 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_malloc_create", 00:19:01.764 "params": { 00:19:01.764 "name": "malloc0", 00:19:01.764 "num_blocks": 8192, 00:19:01.764 "block_size": 4096, 00:19:01.764 "physical_block_size": 4096, 00:19:01.764 "uuid": "2fd35515-731e-4ee9-93fa-435a6e039f37", 00:19:01.764 "optimal_io_boundary": 0, 00:19:01.764 "md_size": 0, 00:19:01.764 "dif_type": 0, 00:19:01.764 "dif_is_head_of_md": false, 00:19:01.764 "dif_pi_format": 0 00:19:01.764 } 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "method": "bdev_wait_for_examine" 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "nbd", 00:19:01.764 "config": [] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "scheduler", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "framework_set_scheduler", 00:19:01.764 "params": { 00:19:01.764 "name": "static" 00:19:01.764 } 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "subsystem": "nvmf", 00:19:01.764 "config": [ 00:19:01.764 { 00:19:01.764 "method": "nvmf_set_config", 00:19:01.764 "params": { 00:19:01.764 "discovery_filter": "match_any", 00:19:01.764 "admin_cmd_passthru": { 00:19:01.764 "identify_ctrlr": false 00:19:01.764 }, 00:19:01.764 "dhchap_digests": [ 00:19:01.764 "sha256", 00:19:01.764 "sha384", 00:19:01.764 "sha512" 00:19:01.764 ], 00:19:01.764 "dhchap_dhgroups": [ 00:19:01.764 "null", 00:19:01.764 "ffdhe2048", 00:19:01.764 "ffdhe3072", 00:19:01.764 "ffdhe4096", 00:19:01.765 "ffdhe6144", 00:19:01.765 "ffdhe8192" 00:19:01.765 ] 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_set_max_subsystems", 00:19:01.765 "params": { 00:19:01.765 "max_subsystems": 1024 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_set_crdt", 00:19:01.765 "params": { 00:19:01.765 "crdt1": 0, 00:19:01.765 "crdt2": 0, 00:19:01.765 "crdt3": 0 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_create_transport", 00:19:01.765 "params": { 00:19:01.765 "trtype": "TCP", 00:19:01.765 "max_queue_depth": 128, 00:19:01.765 "max_io_qpairs_per_ctrlr": 127, 00:19:01.765 "in_capsule_data_size": 4096, 00:19:01.765 "max_io_size": 131072, 00:19:01.765 "io_unit_size": 131072, 00:19:01.765 "max_aq_depth": 128, 00:19:01.765 "num_shared_buffers": 511, 00:19:01.765 "buf_cache_size": 4294967295, 00:19:01.765 "dif_insert_or_strip": false, 00:19:01.765 "zcopy": false, 00:19:01.765 "c2h_success": false, 00:19:01.765 "sock_priority": 0, 00:19:01.765 "abort_timeout_sec": 1, 00:19:01.765 "ack_timeout": 0, 00:19:01.765 "data_wr_pool_size": 0 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_create_subsystem", 00:19:01.765 "params": { 00:19:01.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.765 "allow_any_host": false, 00:19:01.765 "serial_number": "00000000000000000000", 00:19:01.765 "model_number": "SPDK bdev Controller", 00:19:01.765 "max_namespaces": 32, 00:19:01.765 "min_cntlid": 1, 00:19:01.765 "max_cntlid": 65519, 00:19:01.765 "ana_reporting": false 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_subsystem_add_host", 
00:19:01.765 "params": { 00:19:01.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.765 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.765 "psk": "key0" 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_subsystem_add_ns", 00:19:01.765 "params": { 00:19:01.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.765 "namespace": { 00:19:01.765 "nsid": 1, 00:19:01.765 "bdev_name": "malloc0", 00:19:01.765 "nguid": "2FD35515731E4EE993FA435A6E039F37", 00:19:01.765 "uuid": "2fd35515-731e-4ee9-93fa-435a6e039f37", 00:19:01.765 "no_auto_visible": false 00:19:01.765 } 00:19:01.765 } 00:19:01.765 }, 00:19:01.765 { 00:19:01.765 "method": "nvmf_subsystem_add_listener", 00:19:01.765 "params": { 00:19:01.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.765 "listen_address": { 00:19:01.765 "trtype": "TCP", 00:19:01.765 "adrfam": "IPv4", 00:19:01.765 "traddr": "10.0.0.3", 00:19:01.765 "trsvcid": "4420" 00:19:01.765 }, 00:19:01.765 "secure_channel": false, 00:19:01.765 "sock_impl": "ssl" 00:19:01.765 } 00:19:01.765 } 00:19:01.765 ] 00:19:01.765 } 00:19:01.765 ] 00:19:01.765 }' 00:19:01.765 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:02.331 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:02.331 "subsystems": [ 00:19:02.331 { 00:19:02.331 "subsystem": "keyring", 00:19:02.331 "config": [ 00:19:02.331 { 00:19:02.331 "method": "keyring_file_add_key", 00:19:02.331 "params": { 00:19:02.331 "name": "key0", 00:19:02.331 "path": "/tmp/tmp.vy9cFWYoW2" 00:19:02.331 } 00:19:02.331 } 00:19:02.331 ] 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "subsystem": "iobuf", 00:19:02.331 "config": [ 00:19:02.331 { 00:19:02.331 "method": "iobuf_set_options", 00:19:02.331 "params": { 00:19:02.331 "small_pool_count": 8192, 00:19:02.331 "large_pool_count": 1024, 00:19:02.331 "small_bufsize": 8192, 00:19:02.331 "large_bufsize": 135168, 00:19:02.331 "enable_numa": false 00:19:02.331 } 00:19:02.331 } 00:19:02.331 ] 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "subsystem": "sock", 00:19:02.331 "config": [ 00:19:02.331 { 00:19:02.331 "method": "sock_set_default_impl", 00:19:02.331 "params": { 00:19:02.331 "impl_name": "uring" 00:19:02.331 } 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "method": "sock_impl_set_options", 00:19:02.331 "params": { 00:19:02.331 "impl_name": "ssl", 00:19:02.331 "recv_buf_size": 4096, 00:19:02.331 "send_buf_size": 4096, 00:19:02.331 "enable_recv_pipe": true, 00:19:02.331 "enable_quickack": false, 00:19:02.331 "enable_placement_id": 0, 00:19:02.331 "enable_zerocopy_send_server": true, 00:19:02.331 "enable_zerocopy_send_client": false, 00:19:02.331 "zerocopy_threshold": 0, 00:19:02.331 "tls_version": 0, 00:19:02.331 "enable_ktls": false 00:19:02.331 } 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "method": "sock_impl_set_options", 00:19:02.331 "params": { 00:19:02.331 "impl_name": "posix", 00:19:02.331 "recv_buf_size": 2097152, 00:19:02.331 "send_buf_size": 2097152, 00:19:02.331 "enable_recv_pipe": true, 00:19:02.331 "enable_quickack": false, 00:19:02.331 "enable_placement_id": 0, 00:19:02.331 "enable_zerocopy_send_server": true, 00:19:02.331 "enable_zerocopy_send_client": false, 00:19:02.331 "zerocopy_threshold": 0, 00:19:02.331 "tls_version": 0, 00:19:02.331 "enable_ktls": false 00:19:02.331 } 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "method": "sock_impl_set_options", 00:19:02.331 "params": { 00:19:02.331 "impl_name": "uring", 00:19:02.331 
"recv_buf_size": 2097152, 00:19:02.331 "send_buf_size": 2097152, 00:19:02.331 "enable_recv_pipe": true, 00:19:02.331 "enable_quickack": false, 00:19:02.331 "enable_placement_id": 0, 00:19:02.331 "enable_zerocopy_send_server": false, 00:19:02.331 "enable_zerocopy_send_client": false, 00:19:02.331 "zerocopy_threshold": 0, 00:19:02.331 "tls_version": 0, 00:19:02.331 "enable_ktls": false 00:19:02.331 } 00:19:02.331 } 00:19:02.331 ] 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "subsystem": "vmd", 00:19:02.331 "config": [] 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "subsystem": "accel", 00:19:02.331 "config": [ 00:19:02.331 { 00:19:02.331 "method": "accel_set_options", 00:19:02.331 "params": { 00:19:02.331 "small_cache_size": 128, 00:19:02.331 "large_cache_size": 16, 00:19:02.331 "task_count": 2048, 00:19:02.331 "sequence_count": 2048, 00:19:02.331 "buf_count": 2048 00:19:02.331 } 00:19:02.331 } 00:19:02.331 ] 00:19:02.331 }, 00:19:02.331 { 00:19:02.331 "subsystem": "bdev", 00:19:02.331 "config": [ 00:19:02.331 { 00:19:02.332 "method": "bdev_set_options", 00:19:02.332 "params": { 00:19:02.332 "bdev_io_pool_size": 65535, 00:19:02.332 "bdev_io_cache_size": 256, 00:19:02.332 "bdev_auto_examine": true, 00:19:02.332 "iobuf_small_cache_size": 128, 00:19:02.332 "iobuf_large_cache_size": 16 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_raid_set_options", 00:19:02.332 "params": { 00:19:02.332 "process_window_size_kb": 1024, 00:19:02.332 "process_max_bandwidth_mb_sec": 0 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_iscsi_set_options", 00:19:02.332 "params": { 00:19:02.332 "timeout_sec": 30 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_nvme_set_options", 00:19:02.332 "params": { 00:19:02.332 "action_on_timeout": "none", 00:19:02.332 "timeout_us": 0, 00:19:02.332 "timeout_admin_us": 0, 00:19:02.332 "keep_alive_timeout_ms": 10000, 00:19:02.332 "arbitration_burst": 0, 00:19:02.332 "low_priority_weight": 0, 00:19:02.332 "medium_priority_weight": 0, 00:19:02.332 "high_priority_weight": 0, 00:19:02.332 "nvme_adminq_poll_period_us": 10000, 00:19:02.332 "nvme_ioq_poll_period_us": 0, 00:19:02.332 "io_queue_requests": 512, 00:19:02.332 "delay_cmd_submit": true, 00:19:02.332 "transport_retry_count": 4, 00:19:02.332 "bdev_retry_count": 3, 00:19:02.332 "transport_ack_timeout": 0, 00:19:02.332 "ctrlr_loss_timeout_sec": 0, 00:19:02.332 "reconnect_delay_sec": 0, 00:19:02.332 "fast_io_fail_timeout_sec": 0, 00:19:02.332 "disable_auto_failback": false, 00:19:02.332 "generate_uuids": false, 00:19:02.332 "transport_tos": 0, 00:19:02.332 "nvme_error_stat": false, 00:19:02.332 "rdma_srq_size": 0, 00:19:02.332 "io_path_stat": false, 00:19:02.332 "allow_accel_sequence": false, 00:19:02.332 "rdma_max_cq_size": 0, 00:19:02.332 "rdma_cm_event_timeout_ms": 0, 00:19:02.332 "dhchap_digests": [ 00:19:02.332 "sha256", 00:19:02.332 "sha384", 00:19:02.332 "sha512" 00:19:02.332 ], 00:19:02.332 "dhchap_dhgroups": [ 00:19:02.332 "null", 00:19:02.332 "ffdhe2048", 00:19:02.332 "ffdhe3072", 00:19:02.332 "ffdhe4096", 00:19:02.332 "ffdhe6144", 00:19:02.332 "ffdhe8192" 00:19:02.332 ] 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_nvme_attach_controller", 00:19:02.332 "params": { 00:19:02.332 "name": "nvme0", 00:19:02.332 "trtype": "TCP", 00:19:02.332 "adrfam": "IPv4", 00:19:02.332 "traddr": "10.0.0.3", 00:19:02.332 "trsvcid": "4420", 00:19:02.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.332 "prchk_reftag": false, 00:19:02.332 
"prchk_guard": false, 00:19:02.332 "ctrlr_loss_timeout_sec": 0, 00:19:02.332 "reconnect_delay_sec": 0, 00:19:02.332 "fast_io_fail_timeout_sec": 0, 00:19:02.332 "psk": "key0", 00:19:02.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.332 "hdgst": false, 00:19:02.332 "ddgst": false, 00:19:02.332 "multipath": "multipath" 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_nvme_set_hotplug", 00:19:02.332 "params": { 00:19:02.332 "period_us": 100000, 00:19:02.332 "enable": false 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_enable_histogram", 00:19:02.332 "params": { 00:19:02.332 "name": "nvme0n1", 00:19:02.332 "enable": true 00:19:02.332 } 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "method": "bdev_wait_for_examine" 00:19:02.332 } 00:19:02.332 ] 00:19:02.332 }, 00:19:02.332 { 00:19:02.332 "subsystem": "nbd", 00:19:02.332 "config": [] 00:19:02.332 } 00:19:02.332 ] 00:19:02.332 }' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72827 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72827 ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72827 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72827 00:19:02.332 killing process with pid 72827 00:19:02.332 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.332 00:19:02.332 Latency(us) 00:19:02.332 [2024-11-20T05:29:16.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.332 [2024-11-20T05:29:16.845Z] =================================================================================================================== 00:19:02.332 [2024-11-20T05:29:16.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72827' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72827 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72827 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72807 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72807 ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72807 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72807 00:19:02.332 killing process with pid 72807 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72807' 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72807 00:19:02.332 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72807 00:19:02.590 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:02.590 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.590 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.590 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.590 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:02.590 "subsystems": [ 00:19:02.590 { 00:19:02.590 "subsystem": "keyring", 00:19:02.590 "config": [ 00:19:02.590 { 00:19:02.590 "method": "keyring_file_add_key", 00:19:02.590 "params": { 00:19:02.590 "name": "key0", 00:19:02.590 "path": "/tmp/tmp.vy9cFWYoW2" 00:19:02.590 } 00:19:02.590 } 00:19:02.590 ] 00:19:02.590 }, 00:19:02.590 { 00:19:02.590 "subsystem": "iobuf", 00:19:02.590 "config": [ 00:19:02.590 { 00:19:02.590 "method": "iobuf_set_options", 00:19:02.590 "params": { 00:19:02.590 "small_pool_count": 8192, 00:19:02.590 "large_pool_count": 1024, 00:19:02.590 "small_bufsize": 8192, 00:19:02.590 "large_bufsize": 135168, 00:19:02.590 "enable_numa": false 00:19:02.590 } 00:19:02.590 } 00:19:02.590 ] 00:19:02.590 }, 00:19:02.590 { 00:19:02.590 "subsystem": "sock", 00:19:02.590 "config": [ 00:19:02.590 { 00:19:02.590 "method": "sock_set_default_impl", 00:19:02.590 "params": { 00:19:02.591 "impl_name": "uring" 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "sock_impl_set_options", 00:19:02.591 "params": { 00:19:02.591 "impl_name": "ssl", 00:19:02.591 "recv_buf_size": 4096, 00:19:02.591 "send_buf_size": 4096, 00:19:02.591 "enable_recv_pipe": true, 00:19:02.591 "enable_quickack": false, 00:19:02.591 "enable_placement_id": 0, 00:19:02.591 "enable_zerocopy_send_server": true, 00:19:02.591 "enable_zerocopy_send_client": false, 00:19:02.591 "zerocopy_threshold": 0, 00:19:02.591 "tls_version": 0, 00:19:02.591 "enable_ktls": false 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "sock_impl_set_options", 00:19:02.591 "params": { 00:19:02.591 "impl_name": "posix", 00:19:02.591 "recv_buf_size": 2097152, 00:19:02.591 "send_buf_size": 2097152, 00:19:02.591 "enable_recv_pipe": true, 00:19:02.591 "enable_quickack": false, 00:19:02.591 "enable_placement_id": 0, 00:19:02.591 "enable_zerocopy_send_server": true, 00:19:02.591 "enable_zerocopy_send_client": false, 00:19:02.591 "zerocopy_threshold": 0, 00:19:02.591 "tls_version": 0, 00:19:02.591 "enable_ktls": false 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "sock_impl_set_options", 00:19:02.591 "params": { 00:19:02.591 "impl_name": "uring", 00:19:02.591 "recv_buf_size": 2097152, 00:19:02.591 "send_buf_size": 2097152, 00:19:02.591 "enable_recv_pipe": true, 00:19:02.591 "enable_quickack": false, 00:19:02.591 "enable_placement_id": 0, 00:19:02.591 "enable_zerocopy_send_server": false, 00:19:02.591 "enable_zerocopy_send_client": false, 00:19:02.591 "zerocopy_threshold": 0, 00:19:02.591 "tls_version": 0, 
00:19:02.591 "enable_ktls": false 00:19:02.591 } 00:19:02.591 } 00:19:02.591 ] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "vmd", 00:19:02.591 "config": [] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "accel", 00:19:02.591 "config": [ 00:19:02.591 { 00:19:02.591 "method": "accel_set_options", 00:19:02.591 "params": { 00:19:02.591 "small_cache_size": 128, 00:19:02.591 "large_cache_size": 16, 00:19:02.591 "task_count": 2048, 00:19:02.591 "sequence_count": 2048, 00:19:02.591 "buf_count": 2048 00:19:02.591 } 00:19:02.591 } 00:19:02.591 ] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "bdev", 00:19:02.591 "config": [ 00:19:02.591 { 00:19:02.591 "method": "bdev_set_options", 00:19:02.591 "params": { 00:19:02.591 "bdev_io_pool_size": 65535, 00:19:02.591 "bdev_io_cache_size": 256, 00:19:02.591 "bdev_auto_examine": true, 00:19:02.591 "iobuf_small_cache_size": 128, 00:19:02.591 "iobuf_large_cache_size": 16 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "bdev_raid_set_options", 00:19:02.591 "params": { 00:19:02.591 "process_window_size_kb": 1024, 00:19:02.591 "process_max_bandwidth_mb_sec": 0 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "bdev_iscsi_set_options", 00:19:02.591 "params": { 00:19:02.591 "timeout_sec": 30 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "bdev_nvme_set_options", 00:19:02.591 "params": { 00:19:02.591 "action_on_timeout": "none", 00:19:02.591 "timeout_us": 0, 00:19:02.591 "timeout_admin_us": 0, 00:19:02.591 "keep_alive_timeout_ms": 10000, 00:19:02.591 "arbitration_burst": 0, 00:19:02.591 "low_priority_weight": 0, 00:19:02.591 "medium_priority_weight": 0, 00:19:02.591 "high_priority_weight": 0, 00:19:02.591 "nvme_adminq_poll_period_us": 10000, 00:19:02.591 "nvme_ioq_poll_period_us": 0, 00:19:02.591 "io_queue_requests": 0, 00:19:02.591 "delay_cmd_submit": true, 00:19:02.591 "transport_retry_count": 4, 00:19:02.591 "bdev_retry_count": 3, 00:19:02.591 "transport_ack_timeout": 0, 00:19:02.591 "ctrlr_loss_timeout_sec": 0, 00:19:02.591 "reconnect_delay_sec": 0, 00:19:02.591 "fast_io_fail_timeout_sec": 0, 00:19:02.591 "disable_auto_failback": false, 00:19:02.591 "generate_uuids": false, 00:19:02.591 "transport_tos": 0, 00:19:02.591 "nvme_error_stat": false, 00:19:02.591 "rdma_srq_size": 0, 00:19:02.591 "io_path_stat": false, 00:19:02.591 "allow_accel_sequence": false, 00:19:02.591 "rdma_max_cq_size": 0, 00:19:02.591 "rdma_cm_event_timeout_ms": 0, 00:19:02.591 "dhchap_digests": [ 00:19:02.591 "sha256", 00:19:02.591 "sha384", 00:19:02.591 "sha512" 00:19:02.591 ], 00:19:02.591 "dhchap_dhgroups": [ 00:19:02.591 "null", 00:19:02.591 "ffdhe2048", 00:19:02.591 "ffdhe3072", 00:19:02.591 "ffdhe4096", 00:19:02.591 "ffdhe6144", 00:19:02.591 "ffdhe8192" 00:19:02.591 ] 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "bdev_nvme_set_hotplug", 00:19:02.591 "params": { 00:19:02.591 "period_us": 100000, 00:19:02.591 "enable": false 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "bdev_malloc_create", 00:19:02.591 "params": { 00:19:02.591 "name": "malloc0", 00:19:02.591 "num_blocks": 8192, 00:19:02.591 "block_size": 4096, 00:19:02.591 "physical_block_size": 4096, 00:19:02.591 "uuid": "2fd35515-731e-4ee9-93fa-435a6e039f37", 00:19:02.591 "optimal_io_boundary": 0, 00:19:02.591 "md_size": 0, 00:19:02.591 "dif_type": 0, 00:19:02.591 "dif_is_head_of_md": false, 00:19:02.591 "dif_pi_format": 0 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": 
"bdev_wait_for_examine" 00:19:02.591 } 00:19:02.591 ] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "nbd", 00:19:02.591 "config": [] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "scheduler", 00:19:02.591 "config": [ 00:19:02.591 { 00:19:02.591 "method": "framework_set_scheduler", 00:19:02.591 "params": { 00:19:02.591 "name": "static" 00:19:02.591 } 00:19:02.591 } 00:19:02.591 ] 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "subsystem": "nvmf", 00:19:02.591 "config": [ 00:19:02.591 { 00:19:02.591 "method": "nvmf_set_config", 00:19:02.591 "params": { 00:19:02.591 "discovery_filter": "match_any", 00:19:02.591 "admin_cmd_passthru": { 00:19:02.591 "identify_ctrlr": false 00:19:02.591 }, 00:19:02.591 "dhchap_digests": [ 00:19:02.591 "sha256", 00:19:02.591 "sha384", 00:19:02.591 "sha512" 00:19:02.591 ], 00:19:02.591 "dhchap_dhgroups": [ 00:19:02.591 "null", 00:19:02.591 "ffdhe2048", 00:19:02.591 "ffdhe3072", 00:19:02.591 "ffdhe4096", 00:19:02.591 "ffdhe6144", 00:19:02.591 "ffdhe8192" 00:19:02.591 ] 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "nvmf_set_max_subsystems", 00:19:02.591 "params": { 00:19:02.591 "max_subsystems": 1024 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "nvmf_set_crdt", 00:19:02.591 "params": { 00:19:02.591 "crdt1": 0, 00:19:02.591 "crdt2": 0, 00:19:02.591 "crdt3": 0 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "nvmf_create_transport", 00:19:02.591 "params": { 00:19:02.591 "trtype": "TCP", 00:19:02.591 "max_queue_depth": 128, 00:19:02.591 "max_io_qpairs_per_ctrlr": 127, 00:19:02.591 "in_capsule_data_size": 4096, 00:19:02.591 "max_io_size": 131072, 00:19:02.591 "io_unit_size": 131072, 00:19:02.591 "max_aq_depth": 128, 00:19:02.591 "num_shared_buffers": 511, 00:19:02.591 "buf_cache_size": 4294967295, 00:19:02.591 "dif_insert_or_strip": false, 00:19:02.591 "zcopy": false, 00:19:02.591 "c2h_success": false, 00:19:02.591 "sock_priority": 0, 00:19:02.591 "abort_timeout_sec": 1, 00:19:02.591 "ack_timeout": 0, 00:19:02.591 "data_wr_pool_size": 0 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "nvmf_create_subsystem", 00:19:02.591 "params": { 00:19:02.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.591 "allow_any_host": false, 00:19:02.591 "serial_number": "00000000000000000000", 00:19:02.591 "model_number": "SPDK bdev Controller", 00:19:02.591 "max_namespaces": 32, 00:19:02.591 "min_cntlid": 1, 00:19:02.591 "max_cntlid": 65519, 00:19:02.591 "ana_reporting": false 00:19:02.591 } 00:19:02.591 }, 00:19:02.591 { 00:19:02.591 "method": "nvmf_subsystem_add_host", 00:19:02.591 "params": { 00:19:02.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.592 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.592 "psk": "key0" 00:19:02.592 } 00:19:02.592 }, 00:19:02.592 { 00:19:02.592 "method": "nvmf_subsystem_add_ns", 00:19:02.592 "params": { 00:19:02.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.592 "namespace": { 00:19:02.592 "nsid": 1, 00:19:02.592 "bdev_name": "malloc0", 00:19:02.592 "nguid": "2FD35515731E4EE993FA435A6E039F37", 00:19:02.592 "uuid": "2fd35515-731e-4ee9-93fa-435a6e039f37", 00:19:02.592 "no_auto_visible": false 00:19:02.592 } 00:19:02.592 } 00:19:02.592 }, 00:19:02.592 { 00:19:02.592 "method": "nvmf_subsystem_add_listener", 00:19:02.592 "params": { 00:19:02.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.592 "listen_address": { 00:19:02.592 "trtype": "TCP", 00:19:02.592 "adrfam": "IPv4", 00:19:02.592 "traddr": "10.0.0.3", 00:19:02.592 "trsvcid": "4420" 00:19:02.592 
}, 00:19:02.592 "secure_channel": false, 00:19:02.592 "sock_impl": "ssl" 00:19:02.592 } 00:19:02.592 } 00:19:02.592 ] 00:19:02.592 } 00:19:02.592 ] 00:19:02.592 }' 00:19:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72879 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72879 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72879 ']' 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.592 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.592 [2024-11-20 05:29:16.961472] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:02.592 [2024-11-20 05:29:16.961717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.850 [2024-11-20 05:29:17.106300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.850 [2024-11-20 05:29:17.138507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.850 [2024-11-20 05:29:17.138777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.850 [2024-11-20 05:29:17.138925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.850 [2024-11-20 05:29:17.139117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.851 [2024-11-20 05:29:17.139155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.851 [2024-11-20 05:29:17.139590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.851 [2024-11-20 05:29:17.283326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.851 [2024-11-20 05:29:17.342348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.109 [2024-11-20 05:29:17.374292] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.109 [2024-11-20 05:29:17.374739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72917 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72917 /var/tmp/bdevperf.sock 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72917 ']' 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:03.675 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:03.675 "subsystems": [ 00:19:03.675 { 00:19:03.675 "subsystem": "keyring", 00:19:03.675 "config": [ 00:19:03.675 { 00:19:03.675 "method": "keyring_file_add_key", 00:19:03.675 "params": { 00:19:03.675 "name": "key0", 00:19:03.675 "path": "/tmp/tmp.vy9cFWYoW2" 00:19:03.675 } 00:19:03.675 } 00:19:03.675 ] 00:19:03.675 }, 00:19:03.675 { 00:19:03.675 "subsystem": "iobuf", 00:19:03.675 "config": [ 00:19:03.675 { 00:19:03.675 "method": "iobuf_set_options", 00:19:03.675 "params": { 00:19:03.675 "small_pool_count": 8192, 00:19:03.675 "large_pool_count": 1024, 00:19:03.675 "small_bufsize": 8192, 00:19:03.675 "large_bufsize": 135168, 00:19:03.675 "enable_numa": false 00:19:03.675 } 00:19:03.675 } 00:19:03.675 ] 00:19:03.675 }, 00:19:03.675 { 00:19:03.675 "subsystem": "sock", 00:19:03.675 "config": [ 00:19:03.675 { 00:19:03.675 "method": "sock_set_default_impl", 00:19:03.675 "params": { 00:19:03.675 "impl_name": "uring" 00:19:03.675 } 00:19:03.675 }, 00:19:03.675 { 00:19:03.675 "method": "sock_impl_set_options", 00:19:03.675 "params": { 00:19:03.675 "impl_name": "ssl", 00:19:03.675 "recv_buf_size": 4096, 00:19:03.675 "send_buf_size": 4096, 00:19:03.675 "enable_recv_pipe": true, 00:19:03.675 "enable_quickack": false, 00:19:03.675 "enable_placement_id": 0, 00:19:03.675 "enable_zerocopy_send_server": true, 00:19:03.675 "enable_zerocopy_send_client": false, 00:19:03.675 "zerocopy_threshold": 0, 00:19:03.675 "tls_version": 0, 00:19:03.675 "enable_ktls": false 00:19:03.675 } 00:19:03.675 }, 00:19:03.675 { 00:19:03.675 "method": "sock_impl_set_options", 00:19:03.675 "params": { 00:19:03.675 
"impl_name": "posix", 00:19:03.675 "recv_buf_size": 2097152, 00:19:03.675 "send_buf_size": 2097152, 00:19:03.675 "enable_recv_pipe": true, 00:19:03.675 "enable_quickack": false, 00:19:03.675 "enable_placement_id": 0, 00:19:03.675 "enable_zerocopy_send_server": true, 00:19:03.675 "enable_zerocopy_send_client": false, 00:19:03.675 "zerocopy_threshold": 0, 00:19:03.675 "tls_version": 0, 00:19:03.675 "enable_ktls": false 00:19:03.675 } 00:19:03.675 }, 00:19:03.675 { 00:19:03.675 "method": "sock_impl_set_options", 00:19:03.675 "params": { 00:19:03.675 "impl_name": "uring", 00:19:03.675 "recv_buf_size": 2097152, 00:19:03.675 "send_buf_size": 2097152, 00:19:03.675 "enable_recv_pipe": true, 00:19:03.675 "enable_quickack": false, 00:19:03.675 "enable_placement_id": 0, 00:19:03.675 "enable_zerocopy_send_server": false, 00:19:03.675 "enable_zerocopy_send_client": false, 00:19:03.675 "zerocopy_threshold": 0, 00:19:03.675 "tls_version": 0, 00:19:03.675 "enable_ktls": false 00:19:03.675 } 00:19:03.676 } 00:19:03.676 ] 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "subsystem": "vmd", 00:19:03.676 "config": [] 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "subsystem": "accel", 00:19:03.676 "config": [ 00:19:03.676 { 00:19:03.676 "method": "accel_set_options", 00:19:03.676 "params": { 00:19:03.676 "small_cache_size": 128, 00:19:03.676 "large_cache_size": 16, 00:19:03.676 "task_count": 2048, 00:19:03.676 "sequence_count": 2048, 00:19:03.676 "buf_count": 2048 00:19:03.676 } 00:19:03.676 } 00:19:03.676 ] 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "subsystem": "bdev", 00:19:03.676 "config": [ 00:19:03.676 { 00:19:03.676 "method": "bdev_set_options", 00:19:03.676 "params": { 00:19:03.676 "bdev_io_pool_size": 65535, 00:19:03.676 "bdev_io_cache_size": 256, 00:19:03.676 "bdev_auto_examine": true, 00:19:03.676 "iobuf_small_cache_size": 128, 00:19:03.676 "iobuf_large_cache_size": 16 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_raid_set_options", 00:19:03.676 "params": { 00:19:03.676 "process_window_size_kb": 1024, 00:19:03.676 "process_max_bandwidth_mb_sec": 0 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_iscsi_set_options", 00:19:03.676 "params": { 00:19:03.676 "timeout_sec": 30 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_nvme_set_options", 00:19:03.676 "params": { 00:19:03.676 "action_on_timeout": "none", 00:19:03.676 "timeout_us": 0, 00:19:03.676 "timeout_admin_us": 0, 00:19:03.676 "keep_alive_timeout_ms": 10000, 00:19:03.676 "arbitration_burst": 0, 00:19:03.676 "low_priority_weight": 0, 00:19:03.676 "medium_priority_weight": 0, 00:19:03.676 "high_priority_weight": 0, 00:19:03.676 "nvme_adminq_poll_period_us": 10000, 00:19:03.676 "nvme_ioq_poll_period_us": 0, 00:19:03.676 "io_queue_requests": 512, 00:19:03.676 "delay_cmd_submit": true, 00:19:03.676 "transport_retry_count": 4, 00:19:03.676 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.676 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:03.676 "bdev_retry_count": 3, 00:19:03.676 "transport_ack_timeout": 0, 00:19:03.676 "ctrlr_loss_timeout_sec": 0, 00:19:03.676 "reconnect_delay_sec": 0, 00:19:03.676 "fast_io_fail_timeout_sec": 0, 00:19:03.676 "disable_auto_failback": false, 00:19:03.676 "generate_uuids": false, 00:19:03.676 "transport_tos": 0, 00:19:03.676 "nvme_error_stat": false, 00:19:03.676 "rdma_srq_size": 0, 00:19:03.676 
"io_path_stat": false, 00:19:03.676 "allow_accel_sequence": false, 00:19:03.676 "rdma_max_cq_size": 0, 00:19:03.676 "rdma_cm_event_timeout_ms": 0, 00:19:03.676 "dhchap_digests": [ 00:19:03.676 "sha256", 00:19:03.676 "sha384", 00:19:03.676 "sha512" 00:19:03.676 ], 00:19:03.676 "dhchap_dhgroups": [ 00:19:03.676 "null", 00:19:03.676 "ffdhe2048", 00:19:03.676 "ffdhe3072", 00:19:03.676 "ffdhe4096", 00:19:03.676 "ffdhe6144", 00:19:03.676 "ffdhe8192" 00:19:03.676 ] 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_nvme_attach_controller", 00:19:03.676 "params": { 00:19:03.676 "name": "nvme0", 00:19:03.676 "trtype": "TCP", 00:19:03.676 "adrfam": "IPv4", 00:19:03.676 "traddr": "10.0.0.3", 00:19:03.676 "trsvcid": "4420", 00:19:03.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.676 "prchk_reftag": false, 00:19:03.676 "prchk_guard": false, 00:19:03.676 "ctrlr_loss_timeout_sec": 0, 00:19:03.676 "reconnect_delay_sec": 0, 00:19:03.676 "fast_io_fail_timeout_sec": 0, 00:19:03.676 "psk": "key0", 00:19:03.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.676 "hdgst": false, 00:19:03.676 "ddgst": false, 00:19:03.676 "multipath": "multipath" 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_nvme_set_hotplug", 00:19:03.676 "params": { 00:19:03.676 "period_us": 100000, 00:19:03.676 "enable": false 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_enable_histogram", 00:19:03.676 "params": { 00:19:03.676 "name": "nvme0n1", 00:19:03.676 "enable": true 00:19:03.676 } 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "method": "bdev_wait_for_examine" 00:19:03.676 } 00:19:03.676 ] 00:19:03.676 }, 00:19:03.676 { 00:19:03.676 "subsystem": "nbd", 00:19:03.676 "config": [] 00:19:03.676 } 00:19:03.676 ] 00:19:03.676 }' 00:19:03.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.676 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.676 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:03.676 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.676 [2024-11-20 05:29:18.125697] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:19:03.676 [2024-11-20 05:29:18.126070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72917 ] 00:19:03.934 [2024-11-20 05:29:18.277899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.934 [2024-11-20 05:29:18.312038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.934 [2024-11-20 05:29:18.423778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.192 [2024-11-20 05:29:18.456955] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.192 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.192 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:04.192 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:04.192 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:04.450 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.450 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.708 Running I/O for 1 seconds... 00:19:05.641 3707.00 IOPS, 14.48 MiB/s 00:19:05.641 Latency(us) 00:19:05.641 [2024-11-20T05:29:20.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.641 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:05.641 Verification LBA range: start 0x0 length 0x2000 00:19:05.641 nvme0n1 : 1.02 3755.20 14.67 0.00 0.00 33706.59 6672.76 35508.60 00:19:05.641 [2024-11-20T05:29:20.154Z] =================================================================================================================== 00:19:05.641 [2024-11-20T05:29:20.154Z] Total : 3755.20 14.67 0.00 0.00 33706.59 6672.76 35508.60 00:19:05.641 { 00:19:05.641 "results": [ 00:19:05.641 { 00:19:05.641 "job": "nvme0n1", 00:19:05.641 "core_mask": "0x2", 00:19:05.641 "workload": "verify", 00:19:05.641 "status": "finished", 00:19:05.641 "verify_range": { 00:19:05.641 "start": 0, 00:19:05.641 "length": 8192 00:19:05.641 }, 00:19:05.641 "queue_depth": 128, 00:19:05.641 "io_size": 4096, 00:19:05.641 "runtime": 1.02125, 00:19:05.641 "iops": 3755.201958384333, 00:19:05.641 "mibps": 14.6687576499388, 00:19:05.641 "io_failed": 0, 00:19:05.641 "io_timeout": 0, 00:19:05.641 "avg_latency_us": 33706.58551048951, 00:19:05.641 "min_latency_us": 6672.756363636364, 00:19:05.641 "max_latency_us": 35508.59636363637 00:19:05.641 } 00:19:05.641 ], 00:19:05.641 "core_count": 1 00:19:05.641 } 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 
00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:05.641 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.641 nvmf_trace.0 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72917 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72917 ']' 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72917 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72917 00:19:05.899 killing process with pid 72917 00:19:05.899 Received shutdown signal, test time was about 1.000000 seconds 00:19:05.899 00:19:05.899 Latency(us) 00:19:05.899 [2024-11-20T05:29:20.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.899 [2024-11-20T05:29:20.412Z] =================================================================================================================== 00:19:05.899 [2024-11-20T05:29:20.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72917' 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72917 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72917 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:05.899 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:06.157 rmmod nvme_tcp 00:19:06.157 rmmod nvme_fabrics 00:19:06.157 rmmod nvme_keyring 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72879 ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72879 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72879 ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72879 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72879 00:19:06.157 killing process with pid 72879 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72879' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72879 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72879 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:06.157 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:06.415 05:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.wlfSn2Prba /tmp/tmp.j480GJB7h3 /tmp/tmp.vy9cFWYoW2 00:19:06.415 ************************************ 00:19:06.415 END TEST nvmf_tls 00:19:06.415 ************************************ 00:19:06.415 00:19:06.415 real 1m24.569s 00:19:06.415 user 2m21.183s 00:19:06.415 sys 0m25.944s 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.415 ************************************ 00:19:06.415 START TEST nvmf_fips 00:19:06.415 ************************************ 00:19:06.415 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:06.675 * Looking for test storage... 
00:19:06.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:06.675 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:06.675 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.675 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.675 --rc genhtml_branch_coverage=1 00:19:06.675 --rc genhtml_function_coverage=1 00:19:06.675 --rc genhtml_legend=1 00:19:06.675 --rc geninfo_all_blocks=1 00:19:06.675 --rc geninfo_unexecuted_blocks=1 00:19:06.675 00:19:06.675 ' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.675 --rc genhtml_branch_coverage=1 00:19:06.675 --rc genhtml_function_coverage=1 00:19:06.675 --rc genhtml_legend=1 00:19:06.675 --rc geninfo_all_blocks=1 00:19:06.675 --rc geninfo_unexecuted_blocks=1 00:19:06.675 00:19:06.675 ' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.675 --rc genhtml_branch_coverage=1 00:19:06.675 --rc genhtml_function_coverage=1 00:19:06.675 --rc genhtml_legend=1 00:19:06.675 --rc geninfo_all_blocks=1 00:19:06.675 --rc geninfo_unexecuted_blocks=1 00:19:06.675 00:19:06.675 ' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.675 --rc genhtml_branch_coverage=1 00:19:06.675 --rc genhtml_function_coverage=1 00:19:06.675 --rc genhtml_legend=1 00:19:06.675 --rc geninfo_all_blocks=1 00:19:06.675 --rc geninfo_unexecuted_blocks=1 00:19:06.675 00:19:06.675 ' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
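The trace above shows scripts/common.sh deciding whether the installed lcov is older than 1.15: cmp_versions splits both version strings on '.'/'-' into arrays and compares them field by field. A minimal standalone sketch of that comparison follows; the function name cmp_ver and the lt/gt/eq return convention are illustrative simplifications, not the exact common.sh helpers (which also expose the lt/ge wrappers used later for the OpenSSL 3.x check).

# Illustrative field-by-field version comparison in the spirit of scripts/common.sh.
cmp_ver() {
    local IFS=.-
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        if (( ${a[v]:-0} > ${b[v]:-0} )); then echo gt; return; fi
        if (( ${a[v]:-0} < ${b[v]:-0} )); then echo lt; return; fi
    done
    echo eq
}
cmp_ver 1.15 2       # prints "lt" -- matches the 'lt 1.15 2' check traced above
cmp_ver 3.1.1 3.0.0  # prints "gt" -- matches the 'ge 3.1.1 3.0.0' OpenSSL check further down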
00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.675 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:06.676 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:06.935 Error setting digest 00:19:06.935 4092164C3C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:06.935 4092164C3C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:06.935 
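The fips.sh steps traced above verify FIPS enforcement indirectly: after pointing OPENSSL_CONF at the generated spdk_fips.conf (which loads only the base and FIPS providers), the script attempts an MD5 digest and treats success as a failure, since MD5 is not a FIPS-approved algorithm; the "Error setting digest ... unsupported" output above is the expected outcome. A simplified version of that probe, assuming OPENSSL_CONF already points at a FIPS-only provider configuration and reading stdin instead of the /dev/fd/62 redirection used by the script:

# Expect MD5 to be rejected when only FIPS-approved algorithms are available.
export OPENSSL_CONF=spdk_fips.conf
if echo -n probe | openssl md5 >/dev/null 2>&1; then
    echo "MD5 digest succeeded -- FIPS providers do not appear to be enforced" >&2
    exit 1
fi
echo "MD5 rejected as expected -- FIPS-only provider configuration is active"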
05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:06.935 Cannot find device "nvmf_init_br" 00:19:06.935 05:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:06.935 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:06.935 Cannot find device "nvmf_init_br2" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:06.936 Cannot find device "nvmf_tgt_br" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.936 Cannot find device "nvmf_tgt_br2" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:06.936 Cannot find device "nvmf_init_br" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:06.936 Cannot find device "nvmf_init_br2" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:06.936 Cannot find device "nvmf_tgt_br" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:06.936 Cannot find device "nvmf_tgt_br2" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:06.936 Cannot find device "nvmf_br" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:06.936 Cannot find device "nvmf_init_if" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:06.936 Cannot find device "nvmf_init_if2" 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.936 05:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.936 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:07.195 00:19:07.195 --- 10.0.0.3 ping statistics --- 00:19:07.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.195 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:07.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:19:07.195 00:19:07.195 --- 10.0.0.4 ping statistics --- 00:19:07.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.195 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:07.195 00:19:07.195 --- 10.0.0.1 ping statistics --- 00:19:07.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.195 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:07.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:07.195 00:19:07.195 --- 10.0.0.2 ping statistics --- 00:19:07.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.195 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73233 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73233 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 73233 ']' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.195 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.455 [2024-11-20 05:29:21.721101] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
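The nvmf_veth_init sequence traced above builds the virtual topology the FIPS test runs on: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers enslaved to nvmf_br, addresses 10.0.0.1-10.0.0.4/24 assigned, and SPDK_NVMF-tagged iptables rules opened for port 4420 before the ping sanity checks. Condensed to a single initiator/target leg (the traced helper creates two of each, and uses a longer comment tag), the setup looks roughly like this:

# One initiator leg and one target leg of the nvmf test topology,
# condensed from the nvmf_veth_init trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open NVMe/TCP traffic in and across the bridge; the comment tag lets cleanup strip it later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.3   # host side reaches the in-namespace target address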
00:19:07.455 [2024-11-20 05:29:21.721880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.455 [2024-11-20 05:29:21.870856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.455 [2024-11-20 05:29:21.903801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.455 [2024-11-20 05:29:21.904067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.455 [2024-11-20 05:29:21.904231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.455 [2024-11-20 05:29:21.904351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.455 [2024-11-20 05:29:21.904387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.455 [2024-11-20 05:29:21.904776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.455 [2024-11-20 05:29:21.935376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.5M4 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.5M4 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.5M4 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.5M4 00:19:08.390 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.649 [2024-11-20 05:29:23.016017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.649 [2024-11-20 05:29:23.031972] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.649 [2024-11-20 05:29:23.032202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:08.649 malloc0 00:19:08.649 05:29:23 
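With the target up and listening on 10.0.0.3:4420, fips.sh prepares the TLS pre-shared key: the key is in the NVMe/TCP PSK interchange format (NVMeTLSkey-1:01:...), written to a private temp file that both the target configuration (setup_nvmf_tgt_conf, whose rpc.py batch is not expanded in this trace) and the initiator keyring reference. The traced preparation boils down to:

# Write the TLS PSK interchange-format key to a mode-0600 temp file.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
# setup_nvmf_tgt_conf then hands $key_path to scripts/rpc.py to configure the TCP
# transport, the malloc0-backed subsystem and the TLS listener seen above.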
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73276 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73276 /var/tmp/bdevperf.sock 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 73276 ']' 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:08.649 05:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.907 [2024-11-20 05:29:23.165751] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:08.907 [2024-11-20 05:29:23.166227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73276 ] 00:19:08.907 [2024-11-20 05:29:23.309738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.907 [2024-11-20 05:29:23.355926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.907 [2024-11-20 05:29:23.392167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.840 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:09.840 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:09.840 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.5M4 00:19:10.097 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.355 [2024-11-20 05:29:24.817860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.676 TLSTESTn1 00:19:10.676 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.676 Running I/O for 10 seconds... 
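On the initiator side, bdevperf is started with -z (wait for RPC) on its own socket, and the trace then drives it through three RPCs: register the PSK file as keyring key key0, attach an NVMe/TCP controller to the TLS listener with --psk, and kick off the queued verify workload. Restating those traced steps as a plain script:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Register the PSK file under the name the attach call will reference.
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/spdk-psk.5M4

# Attach an NVMe/TCP controller to the TLS listener; TLSTESTn1 is the resulting bdev.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the queued 10-second verify workload and report IOPS/latency as JSON.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests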
00:19:12.541 3590.00 IOPS, 14.02 MiB/s [2024-11-20T05:29:28.428Z] 3744.00 IOPS, 14.62 MiB/s [2024-11-20T05:29:29.380Z] 3752.33 IOPS, 14.66 MiB/s [2024-11-20T05:29:30.313Z] 3711.50 IOPS, 14.50 MiB/s [2024-11-20T05:29:31.280Z] 3563.40 IOPS, 13.92 MiB/s [2024-11-20T05:29:32.242Z] 3528.83 IOPS, 13.78 MiB/s [2024-11-20T05:29:33.176Z] 3519.86 IOPS, 13.75 MiB/s [2024-11-20T05:29:34.110Z] 3433.62 IOPS, 13.41 MiB/s [2024-11-20T05:29:35.044Z] 3471.78 IOPS, 13.56 MiB/s [2024-11-20T05:29:35.302Z] 3452.40 IOPS, 13.49 MiB/s 00:19:20.789 Latency(us) 00:19:20.789 [2024-11-20T05:29:35.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.789 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.789 Verification LBA range: start 0x0 length 0x2000 00:19:20.789 TLSTESTn1 : 10.02 3458.25 13.51 0.00 0.00 36948.67 6791.91 35746.91 00:19:20.789 [2024-11-20T05:29:35.302Z] =================================================================================================================== 00:19:20.789 [2024-11-20T05:29:35.302Z] Total : 3458.25 13.51 0.00 0.00 36948.67 6791.91 35746.91 00:19:20.789 { 00:19:20.789 "results": [ 00:19:20.789 { 00:19:20.789 "job": "TLSTESTn1", 00:19:20.789 "core_mask": "0x4", 00:19:20.789 "workload": "verify", 00:19:20.789 "status": "finished", 00:19:20.789 "verify_range": { 00:19:20.789 "start": 0, 00:19:20.789 "length": 8192 00:19:20.789 }, 00:19:20.789 "queue_depth": 128, 00:19:20.789 "io_size": 4096, 00:19:20.789 "runtime": 10.019794, 00:19:20.789 "iops": 3458.2547305862777, 00:19:20.789 "mibps": 13.508807541352647, 00:19:20.789 "io_failed": 0, 00:19:20.789 "io_timeout": 0, 00:19:20.789 "avg_latency_us": 36948.67165024753, 00:19:20.789 "min_latency_us": 6791.912727272727, 00:19:20.789 "max_latency_us": 35746.90909090909 00:19:20.789 } 00:19:20.789 ], 00:19:20.789 "core_count": 1 00:19:20.789 } 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.789 nvmf_trace.0 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73276 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 73276 ']' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
73276 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73276 00:19:20.789 killing process with pid 73276 00:19:20.789 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.789 00:19:20.789 Latency(us) 00:19:20.789 [2024-11-20T05:29:35.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.789 [2024-11-20T05:29:35.302Z] =================================================================================================================== 00:19:20.789 [2024-11-20T05:29:35.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73276' 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 73276 00:19:20.789 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 73276 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.048 rmmod nvme_tcp 00:19:21.048 rmmod nvme_fabrics 00:19:21.048 rmmod nvme_keyring 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73233 ']' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73233 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 73233 ']' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 73233 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73233 00:19:21.048 killing process with pid 73233 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73233' 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 73233 00:19:21.048 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 73233 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:21.306 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:21.565 05:29:35 
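Cleanup around this point mirrors setup in reverse: the nvmf_tgt process is killed and waited on, the kernel initiator modules are unloaded, the SPDK_NVMF-tagged iptables rules are filtered back out (iptables-save piped through grep -v SPDK_NVMF into iptables-restore, per the iptr trace), and the veth/bridge/namespace topology is torn down. A condensed sketch of that teardown, with the final namespace removal assumed (remove_spdk_ns runs with tracing suppressed here):

kill "$nvmfpid" && wait "$nvmfpid"              # stop the target application
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop only the rules the test added (tagged SPDK_NVMF at setup time).
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                # assumed: the net effect of remove_spdk_ns
rm -f /tmp/spdk-psk.5M4                         # discard the TLS PSK file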
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.5M4 00:19:21.565 ************************************ 00:19:21.565 END TEST nvmf_fips 00:19:21.565 ************************************ 00:19:21.565 00:19:21.565 real 0m14.945s 00:19:21.565 user 0m21.296s 00:19:21.565 sys 0m5.536s 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.565 ************************************ 00:19:21.565 START TEST nvmf_control_msg_list 00:19:21.565 ************************************ 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:21.565 * Looking for test storage... 00:19:21.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:21.565 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.825 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.826 --rc genhtml_branch_coverage=1 00:19:21.826 --rc genhtml_function_coverage=1 00:19:21.826 --rc genhtml_legend=1 00:19:21.826 --rc geninfo_all_blocks=1 00:19:21.826 --rc geninfo_unexecuted_blocks=1 00:19:21.826 00:19:21.826 ' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.826 --rc genhtml_branch_coverage=1 00:19:21.826 --rc genhtml_function_coverage=1 00:19:21.826 --rc genhtml_legend=1 00:19:21.826 --rc geninfo_all_blocks=1 00:19:21.826 --rc geninfo_unexecuted_blocks=1 00:19:21.826 00:19:21.826 ' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.826 --rc genhtml_branch_coverage=1 00:19:21.826 --rc genhtml_function_coverage=1 00:19:21.826 --rc genhtml_legend=1 00:19:21.826 --rc geninfo_all_blocks=1 00:19:21.826 --rc geninfo_unexecuted_blocks=1 00:19:21.826 00:19:21.826 ' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.826 --rc genhtml_branch_coverage=1 00:19:21.826 --rc genhtml_function_coverage=1 00:19:21.826 --rc genhtml_legend=1 00:19:21.826 --rc geninfo_all_blocks=1 00:19:21.826 --rc 
geninfo_unexecuted_blocks=1 00:19:21.826 00:19:21.826 ' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.826 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:21.827 Cannot find device "nvmf_init_br" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:21.827 Cannot find device "nvmf_init_br2" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:21.827 Cannot find device "nvmf_tgt_br" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.827 Cannot find device "nvmf_tgt_br2" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:21.827 Cannot find device "nvmf_init_br" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:21.827 Cannot find device "nvmf_init_br2" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:21.827 Cannot find device "nvmf_tgt_br" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:21.827 Cannot find device "nvmf_tgt_br2" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:21.827 Cannot find device "nvmf_br" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:21.827 Cannot find 
device "nvmf_init_if" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:21.827 Cannot find device "nvmf_init_if2" 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.827 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:22.086 05:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:22.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:19:22.086 00:19:22.086 --- 10.0.0.3 ping statistics --- 00:19:22.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.086 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:22.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:22.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:19:22.086 00:19:22.086 --- 10.0.0.4 ping statistics --- 00:19:22.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.086 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:22.086 00:19:22.086 --- 10.0.0.1 ping statistics --- 00:19:22.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.086 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:22.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:19:22.086 00:19:22.086 --- 10.0.0.2 ping statistics --- 00:19:22.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.086 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:22.086 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73657 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73657 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 73657 ']' 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:22.087 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.345 [2024-11-20 05:29:36.612415] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:22.345 [2024-11-20 05:29:36.612800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.345 [2024-11-20 05:29:36.768457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.345 [2024-11-20 05:29:36.803632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.345 [2024-11-20 05:29:36.803962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.345 [2024-11-20 05:29:36.804204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.345 [2024-11-20 05:29:36.804467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.345 [2024-11-20 05:29:36.804487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.345 [2024-11-20 05:29:36.804799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.345 [2024-11-20 05:29:36.836973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.603 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.604 [2024-11-20 05:29:36.956772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.604 Malloc0 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.604 [2024-11-20 05:29:36.992229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73677 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73679 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73682 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:22.604 05:29:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73677 00:19:22.921 [2024-11-20 05:29:37.170583] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:22.921 [2024-11-20 05:29:37.180794] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:22.921 [2024-11-20 05:29:37.200819] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:23.855 Initializing NVMe Controllers 00:19:23.855 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:23.855 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:23.855 Initialization complete. Launching workers. 00:19:23.855 ======================================================== 00:19:23.855 Latency(us) 00:19:23.855 Device Information : IOPS MiB/s Average min max 00:19:23.855 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2963.00 11.57 336.83 146.30 4307.76 00:19:23.855 ======================================================== 00:19:23.855 Total : 2963.00 11.57 336.83 146.30 4307.76 00:19:23.855 00:19:23.855 Initializing NVMe Controllers 00:19:23.855 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:23.855 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:23.855 Initialization complete. Launching workers. 00:19:23.855 ======================================================== 00:19:23.855 Latency(us) 00:19:23.856 Device Information : IOPS MiB/s Average min max 00:19:23.856 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2964.00 11.58 336.79 152.44 926.26 00:19:23.856 ======================================================== 00:19:23.856 Total : 2964.00 11.58 336.79 152.44 926.26 00:19:23.856 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73679 00:19:23.856 Initializing NVMe Controllers 00:19:23.856 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:23.856 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:23.856 Initialization complete. Launching workers. 
00:19:23.856 ======================================================== 00:19:23.856 Latency(us) 00:19:23.856 Device Information : IOPS MiB/s Average min max 00:19:23.856 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2979.94 11.64 335.01 178.78 818.67 00:19:23.856 ======================================================== 00:19:23.856 Total : 2979.94 11.64 335.01 178.78 818.67 00:19:23.856 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73682 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:23.856 rmmod nvme_tcp 00:19:23.856 rmmod nvme_fabrics 00:19:23.856 rmmod nvme_keyring 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73657 ']' 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73657 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 73657 ']' 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 73657 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.856 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73657 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:24.114 killing process with pid 73657 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73657' 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 73657 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 73657 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:24.114 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:24.372 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:24.373 00:19:24.373 real 0m2.850s 00:19:24.373 user 0m4.718s 00:19:24.373 sys 0m1.347s 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.373 ************************************ 00:19:24.373 END TEST nvmf_control_msg_list 00:19:24.373 ************************************ 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.373 ************************************ 00:19:24.373 START TEST nvmf_wait_for_buf 00:19:24.373 ************************************ 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.373 * Looking for test storage... 00:19:24.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:24.373 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.633 --rc genhtml_branch_coverage=1 00:19:24.633 --rc genhtml_function_coverage=1 00:19:24.633 --rc genhtml_legend=1 00:19:24.633 --rc geninfo_all_blocks=1 00:19:24.633 --rc geninfo_unexecuted_blocks=1 00:19:24.633 00:19:24.633 ' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.633 --rc genhtml_branch_coverage=1 00:19:24.633 --rc genhtml_function_coverage=1 00:19:24.633 --rc genhtml_legend=1 00:19:24.633 --rc geninfo_all_blocks=1 00:19:24.633 --rc geninfo_unexecuted_blocks=1 00:19:24.633 00:19:24.633 ' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.633 --rc genhtml_branch_coverage=1 00:19:24.633 --rc genhtml_function_coverage=1 00:19:24.633 --rc genhtml_legend=1 00:19:24.633 --rc geninfo_all_blocks=1 00:19:24.633 --rc geninfo_unexecuted_blocks=1 00:19:24.633 00:19:24.633 ' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.633 --rc genhtml_branch_coverage=1 00:19:24.633 --rc genhtml_function_coverage=1 00:19:24.633 --rc genhtml_legend=1 00:19:24.633 --rc geninfo_all_blocks=1 00:19:24.633 --rc geninfo_unexecuted_blocks=1 00:19:24.633 00:19:24.633 ' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.633 05:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.633 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.634 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:24.634 Cannot find device "nvmf_init_br" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:24.634 Cannot find device "nvmf_init_br2" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:24.634 Cannot find device "nvmf_tgt_br" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.634 Cannot find device "nvmf_tgt_br2" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:24.634 Cannot find device "nvmf_init_br" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:24.634 Cannot find device "nvmf_init_br2" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:24.634 Cannot find device "nvmf_tgt_br" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:24.634 Cannot find device "nvmf_tgt_br2" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:24.634 Cannot find device "nvmf_br" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:24.634 Cannot find device "nvmf_init_if" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:24.634 Cannot find device "nvmf_init_if2" 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.634 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.634 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:24.906 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.906 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:19:24.906 00:19:24.906 --- 10.0.0.3 ping statistics --- 00:19:24.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.906 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:24.906 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:24.906 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:19:24.906 00:19:24.906 --- 10.0.0.4 ping statistics --- 00:19:24.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.906 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:24.906 00:19:24.906 --- 10.0.0.1 ping statistics --- 00:19:24.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.906 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:24.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:24.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:24.906 00:19:24.906 --- 10.0.0.2 ping statistics --- 00:19:24.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.906 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73914 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73914 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73914 ']' 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.906 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.168 [2024-11-20 05:29:39.438426] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:19:25.168 [2024-11-20 05:29:39.439274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.168 [2024-11-20 05:29:39.590163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.168 [2024-11-20 05:29:39.637360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.168 [2024-11-20 05:29:39.637444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.168 [2024-11-20 05:29:39.637463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.168 [2024-11-20 05:29:39.637476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.168 [2024-11-20 05:29:39.637486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.168 [2024-11-20 05:29:39.637879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.427 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 [2024-11-20 05:29:39.809259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 Malloc0 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 [2024-11-20 05:29:39.852241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.428 [2024-11-20 05:29:39.876383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.428 05:29:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:25.686 [2024-11-20 05:29:40.087080] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.061 Initializing NVMe Controllers 00:19:27.061 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.061 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:27.061 Initialization complete. Launching workers. 00:19:27.061 ======================================================== 00:19:27.061 Latency(us) 00:19:27.061 Device Information : IOPS MiB/s Average min max 00:19:27.061 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.98 62.50 8000.20 7765.28 8294.04 00:19:27.061 ======================================================== 00:19:27.061 Total : 499.98 62.50 8000.20 7765.28 8294.04 00:19:27.061 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.061 rmmod nvme_tcp 00:19:27.061 rmmod nvme_fabrics 00:19:27.061 rmmod nvme_keyring 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73914 ']' 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73914 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73914 ']' 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 73914 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:27.061 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73914 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:27.320 killing process with pid 73914 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73914' 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73914 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73914 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:27.320 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.579 ************************************ 00:19:27.579 END TEST nvmf_wait_for_buf 00:19:27.579 ************************************ 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:27.579 00:19:27.579 real 0m3.180s 00:19:27.579 user 0m2.587s 00:19:27.579 sys 0m0.754s 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.579 05:29:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.579 ************************************ 00:19:27.579 START TEST nvmf_nsid 00:19:27.579 ************************************ 00:19:27.579 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:27.839 * Looking for test storage... 
00:19:27.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.839 --rc genhtml_branch_coverage=1 00:19:27.839 --rc genhtml_function_coverage=1 00:19:27.839 --rc genhtml_legend=1 00:19:27.839 --rc geninfo_all_blocks=1 00:19:27.839 --rc geninfo_unexecuted_blocks=1 00:19:27.839 00:19:27.839 ' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.839 --rc genhtml_branch_coverage=1 00:19:27.839 --rc genhtml_function_coverage=1 00:19:27.839 --rc genhtml_legend=1 00:19:27.839 --rc geninfo_all_blocks=1 00:19:27.839 --rc geninfo_unexecuted_blocks=1 00:19:27.839 00:19:27.839 ' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.839 --rc genhtml_branch_coverage=1 00:19:27.839 --rc genhtml_function_coverage=1 00:19:27.839 --rc genhtml_legend=1 00:19:27.839 --rc geninfo_all_blocks=1 00:19:27.839 --rc geninfo_unexecuted_blocks=1 00:19:27.839 00:19:27.839 ' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.839 --rc genhtml_branch_coverage=1 00:19:27.839 --rc genhtml_function_coverage=1 00:19:27.839 --rc genhtml_legend=1 00:19:27.839 --rc geninfo_all_blocks=1 00:19:27.839 --rc geninfo_unexecuted_blocks=1 00:19:27.839 00:19:27.839 ' 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.839 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.840 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:27.840 Cannot find device "nvmf_init_br" 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:27.840 Cannot find device "nvmf_init_br2" 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:27.840 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:27.841 Cannot find device "nvmf_tgt_br" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.841 Cannot find device "nvmf_tgt_br2" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:27.841 Cannot find device "nvmf_init_br" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:27.841 Cannot find device "nvmf_init_br2" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:27.841 Cannot find device "nvmf_tgt_br" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:27.841 Cannot find device "nvmf_tgt_br2" 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:27.841 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:28.099 Cannot find device "nvmf_br" 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:28.099 Cannot find device "nvmf_init_if" 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:28.099 Cannot find device "nvmf_init_if2" 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:28.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.099 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:28.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:19:28.358 00:19:28.358 --- 10.0.0.3 ping statistics --- 00:19:28.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.358 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:28.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:28.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:19:28.358 00:19:28.358 --- 10.0.0.4 ping statistics --- 00:19:28.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.358 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:28.358 00:19:28.358 --- 10.0.0.1 ping statistics --- 00:19:28.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.358 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:28.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:28.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:28.358 00:19:28.358 --- 10.0.0.2 ping statistics --- 00:19:28.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.358 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74171 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74171 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 74171 ']' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.358 05:29:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.358 [2024-11-20 05:29:42.808406] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:19:28.358 [2024-11-20 05:29:42.808527] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.617 [2024-11-20 05:29:42.962475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.617 [2024-11-20 05:29:43.007304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.617 [2024-11-20 05:29:43.007381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.617 [2024-11-20 05:29:43.007399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.617 [2024-11-20 05:29:43.007412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.617 [2024-11-20 05:29:43.007423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.617 [2024-11-20 05:29:43.007826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.617 [2024-11-20 05:29:43.043028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.617 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.617 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:28.617 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.617 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.617 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74194 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=581e8f4b-edcb-4a72-a0d6-549eba5e8efd 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=def41390-e2c6-4b84-89d1-23f36b1298de 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=61260bb8-c8f8-4bc1-b594-5150a0498400 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.875 null0 00:19:28.875 null1 00:19:28.875 null2 00:19:28.875 [2024-11-20 05:29:43.185038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.875 [2024-11-20 05:29:43.197938] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:28.875 [2024-11-20 05:29:43.198031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74194 ] 00:19:28.875 [2024-11-20 05:29:43.209260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74194 /var/tmp/tgt2.sock 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 74194 ']' 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
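The three UUIDs generated here are compared a little further down against the NGUIDs that "nvme id-ns" reports for nvme0n1-n3. As a hedged sketch of that uuid2nguid step (the real helper lives in nvmf/common.sh and only its "tr -d -" expansion is visible in the trace), the conversion amounts to dropping the dashes and upper-casing:

    # Illustrative only: assumed shape of the uuid2nguid helper traced below.
    uuid2nguid() {
        local uuid=$1
        echo "${uuid//-/}" | tr '[:lower:]' '[:upper:]'   # strip dashes, upper-case
    }
    uuid2nguid 581e8f4b-edcb-4a72-a0d6-549eba5e8efd
    # -> 581E8F4BEDCB4A72A0D6549EBA5E8EFD, which nsid.sh matches against the
    #    .nguid field of 'nvme id-ns /dev/nvme0n1 -o json'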
00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.875 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.875 [2024-11-20 05:29:43.342965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.134 [2024-11-20 05:29:43.392236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.134 [2024-11-20 05:29:43.444666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.393 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.393 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:29.393 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:29.961 [2024-11-20 05:29:44.311882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.961 [2024-11-20 05:29:44.328110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:29.961 nvme0n1 nvme0n2 00:19:29.961 nvme1n1 00:19:29.961 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:29.961 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:29.961 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:19:30.220 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:31.155 05:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 581e8f4b-edcb-4a72-a0d6-549eba5e8efd 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=581e8f4bedcb4a72a0d6549eba5e8efd 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 581E8F4BEDCB4A72A0D6549EBA5E8EFD 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 581E8F4BEDCB4A72A0D6549EBA5E8EFD == \5\8\1\E\8\F\4\B\E\D\C\B\4\A\7\2\A\0\D\6\5\4\9\E\B\A\5\E\8\E\F\D ]] 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid def41390-e2c6-4b84-89d1-23f36b1298de 00:19:31.155 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=def41390e2c64b8489d123f36b1298de 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DEF41390E2C64B8489D123F36B1298DE 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DEF41390E2C64B8489D123F36B1298DE == \D\E\F\4\1\3\9\0\E\2\C\6\4\B\8\4\8\9\D\1\2\3\F\3\6\B\1\2\9\8\D\E ]] 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:31.156 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:31.156 05:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 61260bb8-c8f8-4bc1-b594-5150a0498400 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=61260bb8c8f84bc1b5945150a0498400 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 61260BB8C8F84BC1B5945150A0498400 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 61260BB8C8F84BC1B5945150A0498400 == \6\1\2\6\0\B\B\8\C\8\F\8\4\B\C\1\B\5\9\4\5\1\5\0\A\0\4\9\8\4\0\0 ]] 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74194 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 74194 ']' 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 74194 00:19:31.414 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74194 00:19:31.415 killing process with pid 74194 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74194' 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 74194 00:19:31.415 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 74194 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.984 rmmod nvme_tcp 00:19:31.984 rmmod nvme_fabrics 00:19:31.984 rmmod nvme_keyring 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74171 ']' 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74171 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 74171 ']' 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 74171 00:19:31.984 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74171 00:19:31.985 killing process with pid 74171 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74171' 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 74171 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 74171 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:31.985 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:32.243 ************************************ 00:19:32.243 END TEST nvmf_nsid 00:19:32.243 ************************************ 00:19:32.243 00:19:32.243 real 0m4.650s 00:19:32.243 user 0m7.223s 00:19:32.243 sys 0m1.619s 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:32.243 00:19:32.243 real 5m22.185s 00:19:32.243 user 11m29.530s 00:19:32.243 sys 1m7.442s 00:19:32.243 ************************************ 00:19:32.243 END TEST nvmf_target_extra 00:19:32.243 ************************************ 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.243 05:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.503 05:29:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:32.503 05:29:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:32.503 05:29:46 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.503 05:29:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.503 ************************************ 00:19:32.503 START TEST nvmf_host 00:19:32.503 ************************************ 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:32.503 * Looking for test storage... 
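The firewall side of the cleanup traced above works because every rule was tagged when it was added. A hedged sketch of the ipts/iptr pair, reconstructed from the expanded nvmf/common.sh@790/@791 lines in the trace:

    ipts() {
        # add a rule, tagging it with an SPDK_NVMF comment so teardown can find it
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # drop only the tagged rules; the host's own firewall entries survive
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

This is why the identify test that starts next can add fresh port-4420 ACCEPT rules without caring what the nsid test left behind.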
00:19:32.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.503 --rc genhtml_branch_coverage=1 00:19:32.503 --rc genhtml_function_coverage=1 00:19:32.503 --rc genhtml_legend=1 00:19:32.503 --rc geninfo_all_blocks=1 00:19:32.503 --rc geninfo_unexecuted_blocks=1 00:19:32.503 00:19:32.503 ' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.503 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:32.503 --rc genhtml_branch_coverage=1 00:19:32.503 --rc genhtml_function_coverage=1 00:19:32.503 --rc genhtml_legend=1 00:19:32.503 --rc geninfo_all_blocks=1 00:19:32.503 --rc geninfo_unexecuted_blocks=1 00:19:32.503 00:19:32.503 ' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.503 --rc genhtml_branch_coverage=1 00:19:32.503 --rc genhtml_function_coverage=1 00:19:32.503 --rc genhtml_legend=1 00:19:32.503 --rc geninfo_all_blocks=1 00:19:32.503 --rc geninfo_unexecuted_blocks=1 00:19:32.503 00:19:32.503 ' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.503 --rc genhtml_branch_coverage=1 00:19:32.503 --rc genhtml_function_coverage=1 00:19:32.503 --rc genhtml_legend=1 00:19:32.503 --rc geninfo_all_blocks=1 00:19:32.503 --rc geninfo_unexecuted_blocks=1 00:19:32.503 00:19:32.503 ' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.503 05:29:46 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.503 05:29:47 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.503 05:29:47 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.503 05:29:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:32.504 
05:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.504 05:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.763 ************************************ 00:19:32.763 START TEST nvmf_identify 00:19:32.763 ************************************ 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:32.763 * Looking for test storage... 00:19:32.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:32.763 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:32.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.764 --rc genhtml_branch_coverage=1 00:19:32.764 --rc genhtml_function_coverage=1 00:19:32.764 --rc genhtml_legend=1 00:19:32.764 --rc geninfo_all_blocks=1 00:19:32.764 --rc geninfo_unexecuted_blocks=1 00:19:32.764 00:19:32.764 ' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.764 --rc genhtml_branch_coverage=1 00:19:32.764 --rc genhtml_function_coverage=1 00:19:32.764 --rc genhtml_legend=1 00:19:32.764 --rc geninfo_all_blocks=1 00:19:32.764 --rc geninfo_unexecuted_blocks=1 00:19:32.764 00:19:32.764 ' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.764 --rc genhtml_branch_coverage=1 00:19:32.764 --rc genhtml_function_coverage=1 00:19:32.764 --rc genhtml_legend=1 00:19:32.764 --rc geninfo_all_blocks=1 00:19:32.764 --rc geninfo_unexecuted_blocks=1 00:19:32.764 00:19:32.764 ' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.764 --rc genhtml_branch_coverage=1 00:19:32.764 --rc genhtml_function_coverage=1 00:19:32.764 --rc genhtml_legend=1 00:19:32.764 --rc geninfo_all_blocks=1 00:19:32.764 --rc geninfo_unexecuted_blocks=1 00:19:32.764 00:19:32.764 ' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.764 
05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.764 05:29:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.764 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.765 Cannot find device "nvmf_init_br" 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:32.765 Cannot find device "nvmf_init_br2" 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:32.765 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:33.024 Cannot find device "nvmf_tgt_br" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:33.024 Cannot find device "nvmf_tgt_br2" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:33.024 Cannot find device "nvmf_init_br" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:33.024 Cannot find device "nvmf_init_br2" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:33.024 Cannot find device "nvmf_tgt_br" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:33.024 Cannot find device "nvmf_tgt_br2" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:33.024 Cannot find device "nvmf_br" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:33.024 Cannot find device "nvmf_init_if" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:33.024 Cannot find device "nvmf_init_if2" 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.024 
05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.024 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.283 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.283 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.283 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.283 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:33.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:19:33.284 00:19:33.284 --- 10.0.0.3 ping statistics --- 00:19:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.284 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:33.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:19:33.284 00:19:33.284 --- 10.0.0.4 ping statistics --- 00:19:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.284 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:33.284 00:19:33.284 --- 10.0.0.1 ping statistics --- 00:19:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.284 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:19:33.284 00:19:33.284 --- 10.0.0.2 ping statistics --- 00:19:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.284 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
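The trace above is nvmf/common.sh assembling the test fabric: the "Cannot find device" / "Cannot open network namespace" failures at the top are expected (stale interfaces from a previous run are torn down first, with the "true" fallbacks in the trace swallowing the errors), then a dedicated namespace nvmf_tgt_ns_spdk receives the target ends of two veth pairs, the remaining veth ends are enslaved to the nvmf_br bridge, iptables ACCEPT rules tagged with an SPDK_NVMF comment open TCP port 4420, and the four pings confirm initiator/target reachability in both directions (10.0.0.1-2 on the host side, 10.0.0.3-4 inside the namespace). A condensed reconstruction of one initiator/target path, using only interface names and addresses shown in the log (a sketch, not the script itself; assumes iproute2 and iptables):

    # Sketch: one initiator<->target path of the topology traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Open the NVMe/TCP port and tag the rule so teardown can find exactly what the test added.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator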
00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74551 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74551 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 74551 ']' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.284 05:29:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.284 [2024-11-20 05:29:47.713085] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:33.284 [2024-11-20 05:29:47.713170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.543 [2024-11-20 05:29:47.875729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.543 [2024-11-20 05:29:47.912820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.543 [2024-11-20 05:29:47.913142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.543 [2024-11-20 05:29:47.913283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.543 [2024-11-20 05:29:47.913400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.543 [2024-11-20 05:29:47.913557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
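host/identify.sh then launches the SPDK target inside that namespace: in the command above, -m 0xF pins the app to cores 0-3 (the four reactors reported next), -e 0xFFFF enables every tracepoint group, and -i 0 selects shared-memory instance 0, after which the harness blocks until the RPC socket /var/tmp/spdk.sock is listening. A rough out-of-harness equivalent, with the polling loop standing in for the script's waitforlisten helper (paths copied from the log):

    # Sketch: start nvmf_tgt in the test namespace and wait for its RPC socket to answer.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Crude stand-in for waitforlisten: poll the JSON-RPC endpoint until it responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"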
00:19:33.543 [2024-11-20 05:29:47.914437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.543 [2024-11-20 05:29:47.914534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.543 [2024-11-20 05:29:47.914614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.543 [2024-11-20 05:29:47.914627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.543 [2024-11-20 05:29:47.964566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.543 [2024-11-20 05:29:48.033287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.543 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 Malloc0 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 [2024-11-20 05:29:48.129825] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 [ 00:19:33.802 { 00:19:33.802 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:33.802 "subtype": "Discovery", 00:19:33.802 "listen_addresses": [ 00:19:33.802 { 00:19:33.802 "trtype": "TCP", 00:19:33.802 "adrfam": "IPv4", 00:19:33.802 "traddr": "10.0.0.3", 00:19:33.802 "trsvcid": "4420" 00:19:33.802 } 00:19:33.802 ], 00:19:33.802 "allow_any_host": true, 00:19:33.802 "hosts": [] 00:19:33.802 }, 00:19:33.802 { 00:19:33.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.802 "subtype": "NVMe", 00:19:33.802 "listen_addresses": [ 00:19:33.802 { 00:19:33.802 "trtype": "TCP", 00:19:33.802 "adrfam": "IPv4", 00:19:33.802 "traddr": "10.0.0.3", 00:19:33.802 "trsvcid": "4420" 00:19:33.802 } 00:19:33.802 ], 00:19:33.802 "allow_any_host": true, 00:19:33.802 "hosts": [], 00:19:33.802 "serial_number": "SPDK00000000000001", 00:19:33.802 "model_number": "SPDK bdev Controller", 00:19:33.802 "max_namespaces": 32, 00:19:33.802 "min_cntlid": 1, 00:19:33.802 "max_cntlid": 65519, 00:19:33.802 "namespaces": [ 00:19:33.802 { 00:19:33.802 "nsid": 1, 00:19:33.802 "bdev_name": "Malloc0", 00:19:33.802 "name": "Malloc0", 00:19:33.802 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:33.802 "eui64": "ABCDEF0123456789", 00:19:33.802 "uuid": "ece2a316-a970-4133-8fb6-73cd94fed7fa" 00:19:33.802 } 00:19:33.802 ] 00:19:33.802 } 00:19:33.802 ] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.802 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:33.802 [2024-11-20 05:29:48.184219] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
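Everything the target needed was configured over JSON-RPC in the preceding lines: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001) with that bdev attached as namespace 1 under an explicit NGUID/EUI64, and listeners on 10.0.0.3:4420 for both the subsystem and discovery; nvmf_get_subsystems echoes the resulting configuration, after which spdk_nvme_identify is pointed at the discovery NQN with full debug logging (-L all), producing the trace that follows. The same sequence can be driven by hand with scripts/rpc.py, which is what the test's rpc_cmd helper wraps; a sketch with every value copied from the log:

    # Sketch: the RPC sequence behind the subsystem listing above, issued directly via rpc.py.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems

    # Discovery-controller identify with component debug output, as invoked above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all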
00:19:33.802 [2024-11-20 05:29:48.184287] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74577 ] 00:19:34.063 [2024-11-20 05:29:48.450811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:34.063 [2024-11-20 05:29:48.450957] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:34.063 [2024-11-20 05:29:48.450969] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:34.063 [2024-11-20 05:29:48.450997] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:34.063 [2024-11-20 05:29:48.451013] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:34.063 [2024-11-20 05:29:48.451485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:34.063 [2024-11-20 05:29:48.451584] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x207a750 0 00:19:34.063 [2024-11-20 05:29:48.465967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:34.063 [2024-11-20 05:29:48.466033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:34.063 [2024-11-20 05:29:48.466045] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:34.063 [2024-11-20 05:29:48.466052] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:34.063 [2024-11-20 05:29:48.466118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.466131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.466140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.063 [2024-11-20 05:29:48.466169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:34.063 [2024-11-20 05:29:48.466240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.063 [2024-11-20 05:29:48.473964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.063 [2024-11-20 05:29:48.474050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.063 [2024-11-20 05:29:48.474064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.063 [2024-11-20 05:29:48.474105] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:34.063 [2024-11-20 05:29:48.474126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:34.063 [2024-11-20 05:29:48.474139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:34.063 [2024-11-20 05:29:48.474175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:19:34.063 [2024-11-20 05:29:48.474192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.063 [2024-11-20 05:29:48.474217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.063 [2024-11-20 05:29:48.474276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.063 [2024-11-20 05:29:48.474419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.063 [2024-11-20 05:29:48.474438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.063 [2024-11-20 05:29:48.474447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.063 [2024-11-20 05:29:48.474467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:34.063 [2024-11-20 05:29:48.474482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:34.063 [2024-11-20 05:29:48.474497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.063 [2024-11-20 05:29:48.474529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.063 [2024-11-20 05:29:48.474566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.063 [2024-11-20 05:29:48.474656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.063 [2024-11-20 05:29:48.474675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.063 [2024-11-20 05:29:48.474683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.063 [2024-11-20 05:29:48.474691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.474703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:34.064 [2024-11-20 05:29:48.474720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.474734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.474742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.474749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.474766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.064 [2024-11-20 05:29:48.474802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.474882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.474919] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.474930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.474938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.474950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.474973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.474983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.474990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.475006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.064 [2024-11-20 05:29:48.475042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.475143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.475174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.475183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.475201] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:34.064 [2024-11-20 05:29:48.475212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.475228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.475348] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:34.064 [2024-11-20 05:29:48.475359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.475377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.475408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.064 [2024-11-20 05:29:48.475443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.475533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.475558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.475566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:34.064 [2024-11-20 05:29:48.475574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.475585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:34.064 [2024-11-20 05:29:48.475606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.475637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.064 [2024-11-20 05:29:48.475674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.475770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.475789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.475797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.475814] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:34.064 [2024-11-20 05:29:48.475824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:34.064 [2024-11-20 05:29:48.475839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:34.064 [2024-11-20 05:29:48.475870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:34.064 [2024-11-20 05:29:48.475892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.475900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.475935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.064 [2024-11-20 05:29:48.475975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.476143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.064 [2024-11-20 05:29:48.476182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.064 [2024-11-20 05:29:48.476192] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476200] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207a750): datao=0, datal=4096, cccid=0 00:19:34.064 [2024-11-20 05:29:48.476209] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20de740) on tqpair(0x207a750): expected_datao=0, payload_size=4096 00:19:34.064 [2024-11-20 05:29:48.476217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.476275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.476282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.476307] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:34.064 [2024-11-20 05:29:48.476317] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:34.064 [2024-11-20 05:29:48.476325] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:34.064 [2024-11-20 05:29:48.476335] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:34.064 [2024-11-20 05:29:48.476344] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:34.064 [2024-11-20 05:29:48.476353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:34.064 [2024-11-20 05:29:48.476377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:34.064 [2024-11-20 05:29:48.476392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.476424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.064 [2024-11-20 05:29:48.476462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.064 [2024-11-20 05:29:48.476569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.064 [2024-11-20 05:29:48.476602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.064 [2024-11-20 05:29:48.476612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.064 [2024-11-20 05:29:48.476633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.476661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.064 
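The debug stream above (and continuing below) is spdk_nvme_identify running the standard NVMe-oF controller bring-up against the discovery subsystem: FABRIC CONNECT on the admin queue, property reads of VS and CAP, the CC.EN = 0 / CSTS.RDY = 0 check, enabling the controller with CC.EN = 1 and waiting for CSTS.RDY = 1, Identify Controller, then AER configuration and keep-alive setup. When skimming a -L all capture like this, filtering for the state-machine and admin-command lines keeps the thread visible; a small convenience sketch (identify.log is a hypothetical capture file, and the grep patterns are strings visible in the trace above):

    # Sketch: capture the identify run and reduce it to state transitions plus admin commands.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all 2>&1 | tee identify.log
    grep -E 'setting state to|qpair_print_command' identify.log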
[2024-11-20 05:29:48.476676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.476703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.064 [2024-11-20 05:29:48.476716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x207a750) 00:19:34.064 [2024-11-20 05:29:48.476745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.064 [2024-11-20 05:29:48.476757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.064 [2024-11-20 05:29:48.476772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.476784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.065 [2024-11-20 05:29:48.476793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:34.065 [2024-11-20 05:29:48.476819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:34.065 [2024-11-20 05:29:48.476834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.476841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.476857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.065 [2024-11-20 05:29:48.476896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de740, cid 0, qid 0 00:19:34.065 [2024-11-20 05:29:48.476927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20de8c0, cid 1, qid 0 00:19:34.065 [2024-11-20 05:29:48.476938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20dea40, cid 2, qid 0 00:19:34.065 [2024-11-20 05:29:48.476947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.065 [2024-11-20 05:29:48.476956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ded40, cid 4, qid 0 00:19:34.065 [2024-11-20 05:29:48.477125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.477157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.477166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ded40) on tqpair=0x207a750 00:19:34.065 [2024-11-20 
05:29:48.477185] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:34.065 [2024-11-20 05:29:48.477195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:34.065 [2024-11-20 05:29:48.477219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.477245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.065 [2024-11-20 05:29:48.477281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ded40, cid 4, qid 0 00:19:34.065 [2024-11-20 05:29:48.477398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.065 [2024-11-20 05:29:48.477427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.065 [2024-11-20 05:29:48.477437] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477444] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207a750): datao=0, datal=4096, cccid=4 00:19:34.065 [2024-11-20 05:29:48.477451] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ded40) on tqpair(0x207a750): expected_datao=0, payload_size=4096 00:19:34.065 [2024-11-20 05:29:48.477460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477476] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477484] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.477515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.477522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ded40) on tqpair=0x207a750 00:19:34.065 [2024-11-20 05:29:48.477553] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:34.065 [2024-11-20 05:29:48.477612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.477648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.065 [2024-11-20 05:29:48.477663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.477678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.477691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.065 [2024-11-20 05:29:48.477737] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ded40, cid 4, qid 0 00:19:34.065 [2024-11-20 05:29:48.477751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20deec0, cid 5, qid 0 00:19:34.065 [2024-11-20 05:29:48.482001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.065 [2024-11-20 05:29:48.482063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.065 [2024-11-20 05:29:48.482075] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207a750): datao=0, datal=1024, cccid=4 00:19:34.065 [2024-11-20 05:29:48.482092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ded40) on tqpair(0x207a750): expected_datao=0, payload_size=1024 00:19:34.065 [2024-11-20 05:29:48.482101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.482151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.482159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20deec0) on tqpair=0x207a750 00:19:34.065 [2024-11-20 05:29:48.482180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.482195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.482201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ded40) on tqpair=0x207a750 00:19:34.065 [2024-11-20 05:29:48.482244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.482276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.065 [2024-11-20 05:29:48.482330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ded40, cid 4, qid 0 00:19:34.065 [2024-11-20 05:29:48.482515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.065 [2024-11-20 05:29:48.482547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.065 [2024-11-20 05:29:48.482557] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482564] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207a750): datao=0, datal=3072, cccid=4 00:19:34.065 [2024-11-20 05:29:48.482572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ded40) on tqpair(0x207a750): expected_datao=0, payload_size=3072 00:19:34.065 [2024-11-20 05:29:48.482580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482596] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:19:34.065 [2024-11-20 05:29:48.482604] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.482634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.482641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ded40) on tqpair=0x207a750 00:19:34.065 [2024-11-20 05:29:48.482674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207a750) 00:19:34.065 [2024-11-20 05:29:48.482698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.065 [2024-11-20 05:29:48.482743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ded40, cid 4, qid 0 00:19:34.065 [2024-11-20 05:29:48.482864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.065 [2024-11-20 05:29:48.482892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.065 [2024-11-20 05:29:48.482918] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482928] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207a750): datao=0, datal=8, cccid=4 00:19:34.065 [2024-11-20 05:29:48.482936] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ded40) on tqpair(0x207a750): expected_datao=0, payload_size=8 00:19:34.065 [2024-11-20 05:29:48.482944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482959] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.482999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.065 [2024-11-20 05:29:48.483020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.065 [2024-11-20 05:29:48.483027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.065 [2024-11-20 05:29:48.483037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ded40) on tqpair=0x207a750 00:19:34.065 ===================================================== 00:19:34.065 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:34.065 ===================================================== 00:19:34.065 Controller Capabilities/Features 00:19:34.065 ================================ 00:19:34.065 Vendor ID: 0000 00:19:34.065 Subsystem Vendor ID: 0000 00:19:34.065 Serial Number: .................... 00:19:34.065 Model Number: ........................................ 
00:19:34.065 Firmware Version: 25.01 00:19:34.065 Recommended Arb Burst: 0 00:19:34.065 IEEE OUI Identifier: 00 00 00 00:19:34.065 Multi-path I/O 00:19:34.065 May have multiple subsystem ports: No 00:19:34.065 May have multiple controllers: No 00:19:34.065 Associated with SR-IOV VF: No 00:19:34.065 Max Data Transfer Size: 131072 00:19:34.065 Max Number of Namespaces: 0 00:19:34.065 Max Number of I/O Queues: 1024 00:19:34.066 NVMe Specification Version (VS): 1.3 00:19:34.066 NVMe Specification Version (Identify): 1.3 00:19:34.066 Maximum Queue Entries: 128 00:19:34.066 Contiguous Queues Required: Yes 00:19:34.066 Arbitration Mechanisms Supported 00:19:34.066 Weighted Round Robin: Not Supported 00:19:34.066 Vendor Specific: Not Supported 00:19:34.066 Reset Timeout: 15000 ms 00:19:34.066 Doorbell Stride: 4 bytes 00:19:34.066 NVM Subsystem Reset: Not Supported 00:19:34.066 Command Sets Supported 00:19:34.066 NVM Command Set: Supported 00:19:34.066 Boot Partition: Not Supported 00:19:34.066 Memory Page Size Minimum: 4096 bytes 00:19:34.066 Memory Page Size Maximum: 4096 bytes 00:19:34.066 Persistent Memory Region: Not Supported 00:19:34.066 Optional Asynchronous Events Supported 00:19:34.066 Namespace Attribute Notices: Not Supported 00:19:34.066 Firmware Activation Notices: Not Supported 00:19:34.066 ANA Change Notices: Not Supported 00:19:34.066 PLE Aggregate Log Change Notices: Not Supported 00:19:34.066 LBA Status Info Alert Notices: Not Supported 00:19:34.066 EGE Aggregate Log Change Notices: Not Supported 00:19:34.066 Normal NVM Subsystem Shutdown event: Not Supported 00:19:34.066 Zone Descriptor Change Notices: Not Supported 00:19:34.066 Discovery Log Change Notices: Supported 00:19:34.066 Controller Attributes 00:19:34.066 128-bit Host Identifier: Not Supported 00:19:34.066 Non-Operational Permissive Mode: Not Supported 00:19:34.066 NVM Sets: Not Supported 00:19:34.066 Read Recovery Levels: Not Supported 00:19:34.066 Endurance Groups: Not Supported 00:19:34.066 Predictable Latency Mode: Not Supported 00:19:34.066 Traffic Based Keep ALive: Not Supported 00:19:34.066 Namespace Granularity: Not Supported 00:19:34.066 SQ Associations: Not Supported 00:19:34.066 UUID List: Not Supported 00:19:34.066 Multi-Domain Subsystem: Not Supported 00:19:34.066 Fixed Capacity Management: Not Supported 00:19:34.066 Variable Capacity Management: Not Supported 00:19:34.066 Delete Endurance Group: Not Supported 00:19:34.066 Delete NVM Set: Not Supported 00:19:34.066 Extended LBA Formats Supported: Not Supported 00:19:34.066 Flexible Data Placement Supported: Not Supported 00:19:34.066 00:19:34.066 Controller Memory Buffer Support 00:19:34.066 ================================ 00:19:34.066 Supported: No 00:19:34.066 00:19:34.066 Persistent Memory Region Support 00:19:34.066 ================================ 00:19:34.066 Supported: No 00:19:34.066 00:19:34.066 Admin Command Set Attributes 00:19:34.066 ============================ 00:19:34.066 Security Send/Receive: Not Supported 00:19:34.066 Format NVM: Not Supported 00:19:34.066 Firmware Activate/Download: Not Supported 00:19:34.066 Namespace Management: Not Supported 00:19:34.066 Device Self-Test: Not Supported 00:19:34.066 Directives: Not Supported 00:19:34.066 NVMe-MI: Not Supported 00:19:34.066 Virtualization Management: Not Supported 00:19:34.066 Doorbell Buffer Config: Not Supported 00:19:34.066 Get LBA Status Capability: Not Supported 00:19:34.066 Command & Feature Lockdown Capability: Not Supported 00:19:34.066 Abort Command Limit: 1 00:19:34.066 Async 
Event Request Limit: 4 00:19:34.066 Number of Firmware Slots: N/A 00:19:34.066 Firmware Slot 1 Read-Only: N/A 00:19:34.066 Firmware Activation Without Reset: N/A 00:19:34.066 Multiple Update Detection Support: N/A 00:19:34.066 Firmware Update Granularity: No Information Provided 00:19:34.066 Per-Namespace SMART Log: No 00:19:34.066 Asymmetric Namespace Access Log Page: Not Supported 00:19:34.066 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:34.066 Command Effects Log Page: Not Supported 00:19:34.066 Get Log Page Extended Data: Supported 00:19:34.066 Telemetry Log Pages: Not Supported 00:19:34.066 Persistent Event Log Pages: Not Supported 00:19:34.066 Supported Log Pages Log Page: May Support 00:19:34.066 Commands Supported & Effects Log Page: Not Supported 00:19:34.066 Feature Identifiers & Effects Log Page:May Support 00:19:34.066 NVMe-MI Commands & Effects Log Page: May Support 00:19:34.066 Data Area 4 for Telemetry Log: Not Supported 00:19:34.066 Error Log Page Entries Supported: 128 00:19:34.066 Keep Alive: Not Supported 00:19:34.066 00:19:34.066 NVM Command Set Attributes 00:19:34.066 ========================== 00:19:34.066 Submission Queue Entry Size 00:19:34.066 Max: 1 00:19:34.066 Min: 1 00:19:34.066 Completion Queue Entry Size 00:19:34.066 Max: 1 00:19:34.066 Min: 1 00:19:34.066 Number of Namespaces: 0 00:19:34.066 Compare Command: Not Supported 00:19:34.066 Write Uncorrectable Command: Not Supported 00:19:34.066 Dataset Management Command: Not Supported 00:19:34.066 Write Zeroes Command: Not Supported 00:19:34.066 Set Features Save Field: Not Supported 00:19:34.066 Reservations: Not Supported 00:19:34.066 Timestamp: Not Supported 00:19:34.066 Copy: Not Supported 00:19:34.066 Volatile Write Cache: Not Present 00:19:34.066 Atomic Write Unit (Normal): 1 00:19:34.066 Atomic Write Unit (PFail): 1 00:19:34.066 Atomic Compare & Write Unit: 1 00:19:34.066 Fused Compare & Write: Supported 00:19:34.066 Scatter-Gather List 00:19:34.066 SGL Command Set: Supported 00:19:34.066 SGL Keyed: Supported 00:19:34.066 SGL Bit Bucket Descriptor: Not Supported 00:19:34.066 SGL Metadata Pointer: Not Supported 00:19:34.066 Oversized SGL: Not Supported 00:19:34.066 SGL Metadata Address: Not Supported 00:19:34.066 SGL Offset: Supported 00:19:34.066 Transport SGL Data Block: Not Supported 00:19:34.066 Replay Protected Memory Block: Not Supported 00:19:34.066 00:19:34.066 Firmware Slot Information 00:19:34.066 ========================= 00:19:34.066 Active slot: 0 00:19:34.066 00:19:34.066 00:19:34.066 Error Log 00:19:34.066 ========= 00:19:34.066 00:19:34.066 Active Namespaces 00:19:34.066 ================= 00:19:34.066 Discovery Log Page 00:19:34.066 ================== 00:19:34.066 Generation Counter: 2 00:19:34.066 Number of Records: 2 00:19:34.066 Record Format: 0 00:19:34.066 00:19:34.066 Discovery Log Entry 0 00:19:34.066 ---------------------- 00:19:34.066 Transport Type: 3 (TCP) 00:19:34.066 Address Family: 1 (IPv4) 00:19:34.066 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:34.066 Entry Flags: 00:19:34.066 Duplicate Returned Information: 1 00:19:34.066 Explicit Persistent Connection Support for Discovery: 1 00:19:34.066 Transport Requirements: 00:19:34.066 Secure Channel: Not Required 00:19:34.066 Port ID: 0 (0x0000) 00:19:34.066 Controller ID: 65535 (0xffff) 00:19:34.066 Admin Max SQ Size: 128 00:19:34.066 Transport Service Identifier: 4420 00:19:34.066 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:34.066 Transport Address: 10.0.0.3 00:19:34.066 
Discovery Log Entry 1 00:19:34.066 ---------------------- 00:19:34.066 Transport Type: 3 (TCP) 00:19:34.066 Address Family: 1 (IPv4) 00:19:34.066 Subsystem Type: 2 (NVM Subsystem) 00:19:34.066 Entry Flags: 00:19:34.066 Duplicate Returned Information: 0 00:19:34.066 Explicit Persistent Connection Support for Discovery: 0 00:19:34.066 Transport Requirements: 00:19:34.066 Secure Channel: Not Required 00:19:34.066 Port ID: 0 (0x0000) 00:19:34.066 Controller ID: 65535 (0xffff) 00:19:34.066 Admin Max SQ Size: 128 00:19:34.066 Transport Service Identifier: 4420 00:19:34.066 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:34.066 Transport Address: 10.0.0.3 [2024-11-20 05:29:48.483214] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:34.066 [2024-11-20 05:29:48.483238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de740) on tqpair=0x207a750 00:19:34.066 [2024-11-20 05:29:48.483251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-11-20 05:29:48.483261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20de8c0) on tqpair=0x207a750 00:19:34.066 [2024-11-20 05:29:48.483272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-11-20 05:29:48.483284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20dea40) on tqpair=0x207a750 00:19:34.066 [2024-11-20 05:29:48.483294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-11-20 05:29:48.483304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.066 [2024-11-20 05:29:48.483314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-11-20 05:29:48.483334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.066 [2024-11-20 05:29:48.483343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.066 [2024-11-20 05:29:48.483350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.483365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.483402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.483496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.483517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.483525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.483548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 
05:29:48.483579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.483621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.483743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.483780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.483788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.483806] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:34.067 [2024-11-20 05:29:48.483816] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:34.067 [2024-11-20 05:29:48.483837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.483854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.483870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.483934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.484020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.484040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.484049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.484080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.484113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.484151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.484232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.484251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.484259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.484291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484315] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.484331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.484366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.484446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.484464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.484473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.484504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.484535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.484568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.484661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.484692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.484702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.484733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.484764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.484798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.484888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.484923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.484933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.484963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.484979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.484993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.485028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.485115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.485138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.485146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.485178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.485210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.485245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.485325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.485348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.485357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.485388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.485420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.485455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.485534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.485557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.485566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.485596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.485625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.485658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 
[2024-11-20 05:29:48.485747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.485766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.485774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.067 [2024-11-20 05:29:48.485805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.067 [2024-11-20 05:29:48.485821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.067 [2024-11-20 05:29:48.485838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-11-20 05:29:48.485871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.067 [2024-11-20 05:29:48.489954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.067 [2024-11-20 05:29:48.490011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.067 [2024-11-20 05:29:48.490022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.068 [2024-11-20 05:29:48.490030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.068 [2024-11-20 05:29:48.490064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.068 [2024-11-20 05:29:48.490075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.068 [2024-11-20 05:29:48.490082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207a750) 00:19:34.068 [2024-11-20 05:29:48.490102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.068 [2024-11-20 05:29:48.490157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20debc0, cid 3, qid 0 00:19:34.068 [2024-11-20 05:29:48.490269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.068 [2024-11-20 05:29:48.490285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.068 [2024-11-20 05:29:48.490292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.068 [2024-11-20 05:29:48.490300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20debc0) on tqpair=0x207a750 00:19:34.068 [2024-11-20 05:29:48.490319] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:19:34.068 00:19:34.068 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:34.068 [2024-11-20 05:29:48.559347] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
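A short reproduction sketch for the identify passes in this section: the discovery log page printed above and the controller dump that follows both come from the spdk_nvme_identify binary shown in the traced command. The first invocation (against the discovery NQN) and the nvme-cli line below are illustrative assumptions, since only the cnode1 invocation is shown verbatim in this log, and they assume the TCP listener at 10.0.0.3:4420 is still up.

    # Query the discovery subsystem (yields a Discovery Log Page like the one above):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

    # Query the NVM subsystem with all debug log flags enabled (the run traced above);
    # -L all is what produces the nvme_tcp.c / nvme_ctrlr.c *DEBUG* lines that follow:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all

    # Rough nvme-cli equivalent of the discovery step, assuming nvme-cli is installed on the VM:
    nvme discover -t tcp -a 10.0.0.3 -s 4420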
00:19:34.068 [2024-11-20 05:29:48.559665] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74586 ] 00:19:34.344 [2024-11-20 05:29:48.727124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:34.344 [2024-11-20 05:29:48.727208] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:34.344 [2024-11-20 05:29:48.727216] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:34.344 [2024-11-20 05:29:48.727234] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:34.344 [2024-11-20 05:29:48.727246] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:34.344 [2024-11-20 05:29:48.727617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:34.344 [2024-11-20 05:29:48.727695] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2049750 0 00:19:34.344 [2024-11-20 05:29:48.732951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:34.344 [2024-11-20 05:29:48.732988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:34.344 [2024-11-20 05:29:48.732995] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:34.344 [2024-11-20 05:29:48.732999] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:34.344 [2024-11-20 05:29:48.733034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.344 [2024-11-20 05:29:48.733041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.344 [2024-11-20 05:29:48.733045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.344 [2024-11-20 05:29:48.733066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:34.344 [2024-11-20 05:29:48.733115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.344 [2024-11-20 05:29:48.739050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.344 [2024-11-20 05:29:48.739086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.344 [2024-11-20 05:29:48.739093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.344 [2024-11-20 05:29:48.739099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.344 [2024-11-20 05:29:48.739115] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:34.344 [2024-11-20 05:29:48.739128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:34.344 [2024-11-20 05:29:48.739137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:34.344 [2024-11-20 05:29:48.739163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.344 [2024-11-20 05:29:48.739169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.344 [2024-11-20 05:29:48.739173] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.344 [2024-11-20 05:29:48.739186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.739224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.741563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.741591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.741597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.741610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:34.345 [2024-11-20 05:29:48.741620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:34.345 [2024-11-20 05:29:48.741630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.741650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.741680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.741759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.741775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.741782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.741796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:34.345 [2024-11-20 05:29:48.741807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.741816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.741834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.741859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.741954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.741976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 
[2024-11-20 05:29:48.741980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.741984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.741992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.742004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.742021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.742049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.742125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.742139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.742145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.742155] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:34.345 [2024-11-20 05:29:48.742161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.742171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.742287] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:34.345 [2024-11-20 05:29:48.742307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.742320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.742343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.742378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.742460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.742479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.742484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 
00:19:34.345 [2024-11-20 05:29:48.742496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:34.345 [2024-11-20 05:29:48.742508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.742525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.742551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.742622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.742640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.742644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.742654] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:34.345 [2024-11-20 05:29:48.742660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:34.345 [2024-11-20 05:29:48.742670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:34.345 [2024-11-20 05:29:48.742689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:34.345 [2024-11-20 05:29:48.742705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.742710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.345 [2024-11-20 05:29:48.742721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.345 [2024-11-20 05:29:48.742756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.345 [2024-11-20 05:29:48.748936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.345 [2024-11-20 05:29:48.748981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.345 [2024-11-20 05:29:48.748987] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.748992] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=4096, cccid=0 00:19:34.345 [2024-11-20 05:29:48.748999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ad740) on tqpair(0x2049750): expected_datao=0, payload_size=4096 00:19:34.345 [2024-11-20 05:29:48.749005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.749016] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.749021] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.749036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.345 [2024-11-20 05:29:48.749043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.345 [2024-11-20 05:29:48.749047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.345 [2024-11-20 05:29:48.749054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.345 [2024-11-20 05:29:48.749082] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:34.345 [2024-11-20 05:29:48.749097] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:34.345 [2024-11-20 05:29:48.749106] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:34.345 [2024-11-20 05:29:48.749113] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:34.345 [2024-11-20 05:29:48.749121] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:34.345 [2024-11-20 05:29:48.749131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.346 [2024-11-20 05:29:48.749271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.346 [2024-11-20 05:29:48.749398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.346 [2024-11-20 05:29:48.749414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.346 [2024-11-20 05:29:48.749422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.346 [2024-11-20 05:29:48.749438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.346 [2024-11-20 05:29:48.749462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 
05:29:48.749470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.346 [2024-11-20 05:29:48.749483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.346 [2024-11-20 05:29:48.749511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.346 [2024-11-20 05:29:48.749545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.346 [2024-11-20 05:29:48.749619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad740, cid 0, qid 0 00:19:34.346 [2024-11-20 05:29:48.749633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ad8c0, cid 1, qid 0 00:19:34.346 [2024-11-20 05:29:48.749642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ada40, cid 2, qid 0 00:19:34.346 [2024-11-20 05:29:48.749650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.346 [2024-11-20 05:29:48.749659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.346 [2024-11-20 05:29:48.749780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.346 [2024-11-20 05:29:48.749801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.346 [2024-11-20 05:29:48.749806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.346 [2024-11-20 05:29:48.749817] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:34.346 [2024-11-20 05:29:48.749823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.749854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.749863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.749872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.346 [2024-11-20 05:29:48.749926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.346 [2024-11-20 05:29:48.750027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.346 [2024-11-20 05:29:48.750054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.346 [2024-11-20 05:29:48.750060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.346 [2024-11-20 05:29:48.750158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.750182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.750198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.750212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.346 [2024-11-20 05:29:48.750240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.346 [2024-11-20 05:29:48.750363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.346 [2024-11-20 05:29:48.750381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.346 [2024-11-20 05:29:48.750386] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=4096, cccid=4 00:19:34.346 [2024-11-20 05:29:48.750398] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20add40) on tqpair(0x2049750): expected_datao=0, payload_size=4096 00:19:34.346 [2024-11-20 05:29:48.750405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750418] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 
05:29:48.750452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.346 [2024-11-20 05:29:48.750471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.346 [2024-11-20 05:29:48.750479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.346 [2024-11-20 05:29:48.750511] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:34.346 [2024-11-20 05:29:48.750530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.750545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.750556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.750576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.346 [2024-11-20 05:29:48.750614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.346 [2024-11-20 05:29:48.750862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.346 [2024-11-20 05:29:48.750884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.346 [2024-11-20 05:29:48.750889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750896] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=4096, cccid=4 00:19:34.346 [2024-11-20 05:29:48.750923] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20add40) on tqpair(0x2049750): expected_datao=0, payload_size=4096 00:19:34.346 [2024-11-20 05:29:48.750931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750941] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750949] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.346 [2024-11-20 05:29:48.750973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.346 [2024-11-20 05:29:48.750980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.750987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.346 [2024-11-20 05:29:48.751019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.751039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:34.346 [2024-11-20 05:29:48.751056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.346 [2024-11-20 05:29:48.751063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.346 [2024-11-20 05:29:48.751075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.346 [2024-11-20 05:29:48.751112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.346 [2024-11-20 05:29:48.751340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.346 [2024-11-20 05:29:48.751371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.346 [2024-11-20 05:29:48.751381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751388] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=4096, cccid=4 00:19:34.347 [2024-11-20 05:29:48.751396] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20add40) on tqpair(0x2049750): expected_datao=0, payload_size=4096 00:19:34.347 [2024-11-20 05:29:48.751404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751416] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.751449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.751456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.751479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751553] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:34.347 [2024-11-20 05:29:48.751558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:34.347 [2024-11-20 05:29:48.751565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:34.347 [2024-11-20 05:29:48.751589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 
[2024-11-20 05:29:48.751598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.751611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.751624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.751648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.347 [2024-11-20 05:29:48.751696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.347 [2024-11-20 05:29:48.751710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adec0, cid 5, qid 0 00:19:34.347 [2024-11-20 05:29:48.751825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.751840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.751847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.751865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.751874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.751880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adec0) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.751930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.751941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.751954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.751990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adec0, cid 5, qid 0 00:19:34.347 [2024-11-20 05:29:48.752073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.752084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.752091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adec0) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.752115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752167] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adec0, cid 5, qid 0 00:19:34.347 [2024-11-20 05:29:48.752273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.752297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.752306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adec0) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.752333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adec0, cid 5, qid 0 00:19:34.347 [2024-11-20 05:29:48.752473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.347 [2024-11-20 05:29:48.752486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.347 [2024-11-20 05:29:48.752490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adec0) on tqpair=0x2049750 00:19:34.347 [2024-11-20 05:29:48.752519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2049750) 00:19:34.347 [2024-11-20 05:29:48.752596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.347 [2024-11-20 05:29:48.752625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adec0, cid 5, qid 0 00:19:34.347 
[2024-11-20 05:29:48.752636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20add40, cid 4, qid 0 00:19:34.347 [2024-11-20 05:29:48.752644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ae040, cid 6, qid 0 00:19:34.347 [2024-11-20 05:29:48.752666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ae1c0, cid 7, qid 0 00:19:34.347 [2024-11-20 05:29:48.752847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.347 [2024-11-20 05:29:48.752862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.347 [2024-11-20 05:29:48.752871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.752875] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=8192, cccid=5 00:19:34.347 [2024-11-20 05:29:48.752881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20adec0) on tqpair(0x2049750): expected_datao=0, payload_size=8192 00:19:34.347 [2024-11-20 05:29:48.752886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.756931] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.756964] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.756974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.347 [2024-11-20 05:29:48.756982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.347 [2024-11-20 05:29:48.756986] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.756990] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=512, cccid=4 00:19:34.347 [2024-11-20 05:29:48.756996] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20add40) on tqpair(0x2049750): expected_datao=0, payload_size=512 00:19:34.347 [2024-11-20 05:29:48.757002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.757010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.757015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.757021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.347 [2024-11-20 05:29:48.757027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.347 [2024-11-20 05:29:48.757031] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.757035] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=512, cccid=6 00:19:34.347 [2024-11-20 05:29:48.757039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ae040) on tqpair(0x2049750): expected_datao=0, payload_size=512 00:19:34.347 [2024-11-20 05:29:48.757044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.347 [2024-11-20 05:29:48.757052] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757058] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:34.348 [2024-11-20 05:29:48.757079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:34.348 [2024-11-20 05:29:48.757085] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757091] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2049750): datao=0, datal=4096, cccid=7 00:19:34.348 [2024-11-20 05:29:48.757098] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ae1c0) on tqpair(0x2049750): expected_datao=0, payload_size=4096 00:19:34.348 [2024-11-20 05:29:48.757104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757115] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.348 [2024-11-20 05:29:48.757136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.348 [2024-11-20 05:29:48.757140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adec0) on tqpair=0x2049750 00:19:34.348 [2024-11-20 05:29:48.757171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.348 [2024-11-20 05:29:48.757179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.348 [2024-11-20 05:29:48.757183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20add40) on tqpair=0x2049750 00:19:34.348 [2024-11-20 05:29:48.757200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.348 ===================================================== 00:19:34.348 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:34.348 ===================================================== 00:19:34.348 Controller Capabilities/Features 00:19:34.348 ================================ 00:19:34.348 Vendor ID: 8086 00:19:34.348 Subsystem Vendor ID: 8086 00:19:34.348 Serial Number: SPDK00000000000001 00:19:34.348 Model Number: SPDK bdev Controller 00:19:34.348 Firmware Version: 25.01 00:19:34.348 Recommended Arb Burst: 6 00:19:34.348 IEEE OUI Identifier: e4 d2 5c 00:19:34.348 Multi-path I/O 00:19:34.348 May have multiple subsystem ports: Yes 00:19:34.348 May have multiple controllers: Yes 00:19:34.348 Associated with SR-IOV VF: No 00:19:34.348 Max Data Transfer Size: 131072 00:19:34.348 Max Number of Namespaces: 32 00:19:34.348 Max Number of I/O Queues: 127 00:19:34.348 NVMe Specification Version (VS): 1.3 00:19:34.348 NVMe Specification Version (Identify): 1.3 00:19:34.348 Maximum Queue Entries: 128 00:19:34.348 Contiguous Queues Required: Yes 00:19:34.348 Arbitration Mechanisms Supported 00:19:34.348 Weighted Round Robin: Not Supported 00:19:34.348 Vendor Specific: Not Supported 00:19:34.348 Reset Timeout: 15000 ms 00:19:34.348 Doorbell Stride: 4 bytes 00:19:34.348 NVM Subsystem Reset: Not Supported 00:19:34.348 Command Sets Supported 00:19:34.348 NVM Command Set: Supported 00:19:34.348 Boot Partition: Not Supported 00:19:34.348 Memory Page Size Minimum: 4096 bytes 00:19:34.348 Memory Page Size Maximum: 4096 bytes 00:19:34.348 Persistent Memory Region: Not Supported 00:19:34.348 Optional Asynchronous Events Supported 00:19:34.348 Namespace Attribute Notices: Supported 00:19:34.348 Firmware Activation Notices: Not Supported 00:19:34.348 ANA Change Notices: Not 
Supported 00:19:34.348 PLE Aggregate Log Change Notices: Not Supported 00:19:34.348 LBA Status Info Alert Notices: Not Supported 00:19:34.348 EGE Aggregate Log Change Notices: Not Supported 00:19:34.348 Normal NVM Subsystem Shutdown event: Not Supported 00:19:34.348 Zone Descriptor Change Notices: Not Supported 00:19:34.348 Discovery Log Change Notices: Not Supported 00:19:34.348 Controller Attributes 00:19:34.348 128-bit Host Identifier: Supported 00:19:34.348 Non-Operational Permissive Mode: Not Supported 00:19:34.348 NVM Sets: Not Supported 00:19:34.348 Read Recovery Levels: Not Supported 00:19:34.348 Endurance Groups: Not Supported 00:19:34.348 Predictable Latency Mode: Not Supported 00:19:34.348 Traffic Based Keep ALive: Not Supported 00:19:34.348 Namespace Granularity: Not Supported 00:19:34.348 SQ Associations: Not Supported 00:19:34.348 UUID List: Not Supported 00:19:34.348 Multi-Domain Subsystem: Not Supported 00:19:34.348 Fixed Capacity Management: Not Supported 00:19:34.348 Variable Capacity Management: Not Supported 00:19:34.348 Delete Endurance Group: Not Supported 00:19:34.348 Delete NVM Set: Not Supported 00:19:34.348 Extended LBA Formats Supported: Not Supported 00:19:34.348 Flexible Data Placement Supported: Not Supported 00:19:34.348 00:19:34.348 Controller Memory Buffer Support 00:19:34.348 ================================ 00:19:34.348 Supported: No 00:19:34.348 00:19:34.348 Persistent Memory Region Support 00:19:34.348 ================================ 00:19:34.348 Supported: No 00:19:34.348 00:19:34.348 Admin Command Set Attributes 00:19:34.348 ============================ 00:19:34.348 Security Send/Receive: Not Supported 00:19:34.348 Format NVM: Not Supported 00:19:34.348 Firmware Activate/Download: Not Supported 00:19:34.348 Namespace Management: Not Supported 00:19:34.348 Device Self-Test: Not Supported 00:19:34.348 Directives: Not Supported 00:19:34.348 NVMe-MI: Not Supported 00:19:34.348 Virtualization Management: Not Supported 00:19:34.348 Doorbell Buffer Config: Not Supported 00:19:34.348 Get LBA Status Capability: Not Supported 00:19:34.348 Command & Feature Lockdown Capability: Not Supported 00:19:34.348 Abort Command Limit: 4 00:19:34.348 Async Event Request Limit: 4 00:19:34.348 Number of Firmware Slots: N/A 00:19:34.348 Firmware Slot 1 Read-Only: N/A 00:19:34.348 Firmware Activation Without Reset: [2024-11-20 05:29:48.757206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.348 [2024-11-20 05:29:48.757210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ae040) on tqpair=0x2049750 00:19:34.348 [2024-11-20 05:29:48.757222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.348 [2024-11-20 05:29:48.757229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.348 [2024-11-20 05:29:48.757232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.348 [2024-11-20 05:29:48.757237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ae1c0) on tqpair=0x2049750 00:19:34.348 N/A 00:19:34.348 Multiple Update Detection Support: N/A 00:19:34.348 Firmware Update Granularity: No Information Provided 00:19:34.348 Per-Namespace SMART Log: No 00:19:34.348 Asymmetric Namespace Access Log Page: Not Supported 00:19:34.348 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:34.348 Command Effects Log Page: Supported 00:19:34.348 Get Log Page Extended 
Data: Supported 00:19:34.348 Telemetry Log Pages: Not Supported 00:19:34.348 Persistent Event Log Pages: Not Supported 00:19:34.348 Supported Log Pages Log Page: May Support 00:19:34.348 Commands Supported & Effects Log Page: Not Supported 00:19:34.348 Feature Identifiers & Effects Log Page:May Support 00:19:34.348 NVMe-MI Commands & Effects Log Page: May Support 00:19:34.348 Data Area 4 for Telemetry Log: Not Supported 00:19:34.348 Error Log Page Entries Supported: 128 00:19:34.348 Keep Alive: Supported 00:19:34.348 Keep Alive Granularity: 10000 ms 00:19:34.348 00:19:34.348 NVM Command Set Attributes 00:19:34.348 ========================== 00:19:34.348 Submission Queue Entry Size 00:19:34.348 Max: 64 00:19:34.348 Min: 64 00:19:34.348 Completion Queue Entry Size 00:19:34.348 Max: 16 00:19:34.348 Min: 16 00:19:34.348 Number of Namespaces: 32 00:19:34.348 Compare Command: Supported 00:19:34.348 Write Uncorrectable Command: Not Supported 00:19:34.348 Dataset Management Command: Supported 00:19:34.348 Write Zeroes Command: Supported 00:19:34.348 Set Features Save Field: Not Supported 00:19:34.348 Reservations: Supported 00:19:34.348 Timestamp: Not Supported 00:19:34.348 Copy: Supported 00:19:34.348 Volatile Write Cache: Present 00:19:34.348 Atomic Write Unit (Normal): 1 00:19:34.348 Atomic Write Unit (PFail): 1 00:19:34.348 Atomic Compare & Write Unit: 1 00:19:34.348 Fused Compare & Write: Supported 00:19:34.348 Scatter-Gather List 00:19:34.348 SGL Command Set: Supported 00:19:34.348 SGL Keyed: Supported 00:19:34.348 SGL Bit Bucket Descriptor: Not Supported 00:19:34.348 SGL Metadata Pointer: Not Supported 00:19:34.348 Oversized SGL: Not Supported 00:19:34.348 SGL Metadata Address: Not Supported 00:19:34.348 SGL Offset: Supported 00:19:34.348 Transport SGL Data Block: Not Supported 00:19:34.348 Replay Protected Memory Block: Not Supported 00:19:34.348 00:19:34.348 Firmware Slot Information 00:19:34.348 ========================= 00:19:34.348 Active slot: 1 00:19:34.348 Slot 1 Firmware Revision: 25.01 00:19:34.348 00:19:34.348 00:19:34.348 Commands Supported and Effects 00:19:34.348 ============================== 00:19:34.348 Admin Commands 00:19:34.348 -------------- 00:19:34.348 Get Log Page (02h): Supported 00:19:34.348 Identify (06h): Supported 00:19:34.348 Abort (08h): Supported 00:19:34.349 Set Features (09h): Supported 00:19:34.349 Get Features (0Ah): Supported 00:19:34.349 Asynchronous Event Request (0Ch): Supported 00:19:34.349 Keep Alive (18h): Supported 00:19:34.349 I/O Commands 00:19:34.349 ------------ 00:19:34.349 Flush (00h): Supported LBA-Change 00:19:34.349 Write (01h): Supported LBA-Change 00:19:34.349 Read (02h): Supported 00:19:34.349 Compare (05h): Supported 00:19:34.349 Write Zeroes (08h): Supported LBA-Change 00:19:34.349 Dataset Management (09h): Supported LBA-Change 00:19:34.349 Copy (19h): Supported LBA-Change 00:19:34.349 00:19:34.349 Error Log 00:19:34.349 ========= 00:19:34.349 00:19:34.349 Arbitration 00:19:34.349 =========== 00:19:34.349 Arbitration Burst: 1 00:19:34.349 00:19:34.349 Power Management 00:19:34.349 ================ 00:19:34.349 Number of Power States: 1 00:19:34.349 Current Power State: Power State #0 00:19:34.349 Power State #0: 00:19:34.349 Max Power: 0.00 W 00:19:34.349 Non-Operational State: Operational 00:19:34.349 Entry Latency: Not Reported 00:19:34.349 Exit Latency: Not Reported 00:19:34.349 Relative Read Throughput: 0 00:19:34.349 Relative Read Latency: 0 00:19:34.349 Relative Write Throughput: 0 00:19:34.349 Relative Write Latency: 0 
00:19:34.349 Idle Power: Not Reported 00:19:34.349 Active Power: Not Reported 00:19:34.349 Non-Operational Permissive Mode: Not Supported 00:19:34.349 00:19:34.349 Health Information 00:19:34.349 ================== 00:19:34.349 Critical Warnings: 00:19:34.349 Available Spare Space: OK 00:19:34.349 Temperature: OK 00:19:34.349 Device Reliability: OK 00:19:34.349 Read Only: No 00:19:34.349 Volatile Memory Backup: OK 00:19:34.349 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:34.349 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:34.349 Available Spare: 0% 00:19:34.349 Available Spare Threshold: 0% 00:19:34.349 Life Percentage Used:[2024-11-20 05:29:48.757358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.757378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.757418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ae1c0, cid 7, qid 0 00:19:34.349 [2024-11-20 05:29:48.757532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.757540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.349 [2024-11-20 05:29:48.757543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ae1c0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757598] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:34.349 [2024-11-20 05:29:48.757618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad740) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.349 [2024-11-20 05:29:48.757641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ad8c0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.349 [2024-11-20 05:29:48.757657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ada40) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.349 [2024-11-20 05:29:48.757673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.349 [2024-11-20 05:29:48.757697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.757716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.757756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.349 [2024-11-20 05:29:48.757835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.757862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.349 [2024-11-20 05:29:48.757872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.757895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.757932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.757946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.757983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.349 [2024-11-20 05:29:48.758100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.758119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.349 [2024-11-20 05:29:48.758123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.758134] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:34.349 [2024-11-20 05:29:48.758140] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:34.349 [2024-11-20 05:29:48.758152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.758170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.758193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.349 [2024-11-20 05:29:48.758277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.758288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.349 [2024-11-20 05:29:48.758292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.758310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.758328] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.758351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.349 [2024-11-20 05:29:48.758429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.758437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.349 [2024-11-20 05:29:48.758440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.349 [2024-11-20 05:29:48.758456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.349 [2024-11-20 05:29:48.758465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.349 [2024-11-20 05:29:48.758473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.349 [2024-11-20 05:29:48.758492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.349 [2024-11-20 05:29:48.758566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.349 [2024-11-20 05:29:48.758578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.758582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.758598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.758615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.758634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.758706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.758713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.758717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.758732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.758749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.758767] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.758848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.758859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.758864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.758886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.758899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.758935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.758962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.759059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.759205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 
05:29:48.759396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.759543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.759697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.759858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.759871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.759878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 
[2024-11-20 05:29:48.759886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.759923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.759935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.759944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.759973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.760043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.760053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.760057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.760077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.760104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.760130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.760207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.760220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.760227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.760255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.760290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.760325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.350 [2024-11-20 05:29:48.760393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.350 [2024-11-20 05:29:48.760401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.350 [2024-11-20 05:29:48.760405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.350 [2024-11-20 05:29:48.760422] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.350 [2024-11-20 05:29:48.760431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.350 [2024-11-20 05:29:48.760440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.350 [2024-11-20 05:29:48.760460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.351 [2024-11-20 05:29:48.760528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.351 [2024-11-20 05:29:48.760542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.351 [2024-11-20 05:29:48.760550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.351 [2024-11-20 05:29:48.760569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.351 [2024-11-20 05:29:48.760586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.351 [2024-11-20 05:29:48.760609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.351 [2024-11-20 05:29:48.760685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.351 [2024-11-20 05:29:48.760699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.351 [2024-11-20 05:29:48.760706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.351 [2024-11-20 05:29:48.760723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.351 [2024-11-20 05:29:48.760740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.351 [2024-11-20 05:29:48.760762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.351 [2024-11-20 05:29:48.760830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.351 [2024-11-20 05:29:48.760846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.351 [2024-11-20 05:29:48.760850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.351 [2024-11-20 05:29:48.760867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.760877] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.351 [2024-11-20 05:29:48.760885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.351 [2024-11-20 05:29:48.764942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.351 [2024-11-20 05:29:48.764999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.351 [2024-11-20 05:29:48.765009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.351 [2024-11-20 05:29:48.765014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.765019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.351 [2024-11-20 05:29:48.765044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.765050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.765054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2049750) 00:19:34.351 [2024-11-20 05:29:48.765067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.351 [2024-11-20 05:29:48.765110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20adbc0, cid 3, qid 0 00:19:34.351 [2024-11-20 05:29:48.765232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:34.351 [2024-11-20 05:29:48.765247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:34.351 [2024-11-20 05:29:48.765254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:34.351 [2024-11-20 05:29:48.765262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20adbc0) on tqpair=0x2049750 00:19:34.351 [2024-11-20 05:29:48.765277] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:19:34.351 0% 00:19:34.351 Data Units Read: 0 00:19:34.351 Data Units Written: 0 00:19:34.351 Host Read Commands: 0 00:19:34.351 Host Write Commands: 0 00:19:34.351 Controller Busy Time: 0 minutes 00:19:34.351 Power Cycles: 0 00:19:34.351 Power On Hours: 0 hours 00:19:34.351 Unsafe Shutdowns: 0 00:19:34.351 Unrecoverable Media Errors: 0 00:19:34.351 Lifetime Error Log Entries: 0 00:19:34.351 Warning Temperature Time: 0 minutes 00:19:34.351 Critical Temperature Time: 0 minutes 00:19:34.351 00:19:34.351 Number of Queues 00:19:34.351 ================ 00:19:34.351 Number of I/O Submission Queues: 127 00:19:34.351 Number of I/O Completion Queues: 127 00:19:34.351 00:19:34.351 Active Namespaces 00:19:34.351 ================= 00:19:34.351 Namespace ID:1 00:19:34.351 Error Recovery Timeout: Unlimited 00:19:34.351 Command Set Identifier: NVM (00h) 00:19:34.351 Deallocate: Supported 00:19:34.351 Deallocated/Unwritten Error: Not Supported 00:19:34.351 Deallocated Read Value: Unknown 00:19:34.351 Deallocate in Write Zeroes: Not Supported 00:19:34.351 Deallocated Guard Field: 0xFFFF 00:19:34.351 Flush: Supported 00:19:34.351 Reservation: Supported 00:19:34.351 Namespace Sharing Capabilities: Multiple Controllers 00:19:34.351 Size (in LBAs): 131072 (0GiB) 00:19:34.351 Capacity (in LBAs): 131072 (0GiB) 00:19:34.351 Utilization (in LBAs): 131072 (0GiB) 00:19:34.351 NGUID: ABCDEF0123456789ABCDEF0123456789 
00:19:34.351 EUI64: ABCDEF0123456789 00:19:34.351 UUID: ece2a316-a970-4133-8fb6-73cd94fed7fa 00:19:34.351 Thin Provisioning: Not Supported 00:19:34.351 Per-NS Atomic Units: Yes 00:19:34.351 Atomic Boundary Size (Normal): 0 00:19:34.351 Atomic Boundary Size (PFail): 0 00:19:34.351 Atomic Boundary Offset: 0 00:19:34.351 Maximum Single Source Range Length: 65535 00:19:34.351 Maximum Copy Length: 65535 00:19:34.351 Maximum Source Range Count: 1 00:19:34.351 NGUID/EUI64 Never Reused: No 00:19:34.351 Namespace Write Protected: No 00:19:34.351 Number of LBA Formats: 1 00:19:34.351 Current LBA Format: LBA Format #00 00:19:34.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:34.351 00:19:34.351 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:34.351 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.351 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.351 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.683 rmmod nvme_tcp 00:19:34.683 rmmod nvme_fabrics 00:19:34.683 rmmod nvme_keyring 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74551 ']' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74551 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 74551 ']' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 74551 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74551 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:34.683 killing process with pid 74551 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
74551' 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 74551 00:19:34.683 05:29:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 74551 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:34.683 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.940 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:19:35.198 00:19:35.198 real 0m2.435s 00:19:35.198 user 0m5.170s 00:19:35.198 sys 0m0.731s 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.198 ************************************ 00:19:35.198 END TEST nvmf_identify 00:19:35.198 ************************************ 
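The teardown traced above reduces to a short manual sequence; a minimal sketch, assuming the target from the identify test is still running (pid 74551 in this run) and rpc.py uses its default socket — commands mirror those visible in the trace, not an authoritative procedure:

  # delete the test subsystem, then stop the nvmf target process
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 74551
  # unload the initiator-side kernel modules
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  # drop the SPDK_NVMF iptables rules added during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # tear down the veth/bridge topology and the interfaces inside the target namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

The perf test that starts below re-creates the same namespace and veth topology via nvmftestinit before loading its own target configuration.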
00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.198 ************************************ 00:19:35.198 START TEST nvmf_perf 00:19:35.198 ************************************ 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:35.198 * Looking for test storage... 00:19:35.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.198 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:35.457 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.458 --rc genhtml_branch_coverage=1 00:19:35.458 --rc genhtml_function_coverage=1 00:19:35.458 --rc genhtml_legend=1 00:19:35.458 --rc geninfo_all_blocks=1 00:19:35.458 --rc geninfo_unexecuted_blocks=1 00:19:35.458 00:19:35.458 ' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.458 --rc genhtml_branch_coverage=1 00:19:35.458 --rc genhtml_function_coverage=1 00:19:35.458 --rc genhtml_legend=1 00:19:35.458 --rc geninfo_all_blocks=1 00:19:35.458 --rc geninfo_unexecuted_blocks=1 00:19:35.458 00:19:35.458 ' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.458 --rc genhtml_branch_coverage=1 00:19:35.458 --rc genhtml_function_coverage=1 00:19:35.458 --rc genhtml_legend=1 00:19:35.458 --rc geninfo_all_blocks=1 00:19:35.458 --rc geninfo_unexecuted_blocks=1 00:19:35.458 00:19:35.458 ' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.458 --rc genhtml_branch_coverage=1 00:19:35.458 --rc genhtml_function_coverage=1 00:19:35.458 --rc genhtml_legend=1 00:19:35.458 --rc geninfo_all_blocks=1 00:19:35.458 --rc geninfo_unexecuted_blocks=1 00:19:35.458 00:19:35.458 ' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.458 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:35.459 Cannot find device "nvmf_init_br" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:35.459 Cannot find device "nvmf_init_br2" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:35.459 Cannot find device "nvmf_tgt_br" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.459 Cannot find device "nvmf_tgt_br2" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:35.459 Cannot find device "nvmf_init_br" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:35.459 Cannot find device "nvmf_init_br2" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:35.459 Cannot find device "nvmf_tgt_br" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:35.459 Cannot find device "nvmf_tgt_br2" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:35.459 Cannot find device "nvmf_br" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:35.459 Cannot find device "nvmf_init_if" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:35.459 Cannot find device "nvmf_init_if2" 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:35.459 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:35.459 05:29:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:35.718 05:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:35.718 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:35.718 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:35.718 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:19:35.718 00:19:35.718 --- 10.0.0.3 ping statistics --- 00:19:35.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.718 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:35.719 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:35.719 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:19:35.719 00:19:35.719 --- 10.0.0.4 ping statistics --- 00:19:35.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.719 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:35.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:19:35.719 00:19:35.719 --- 10.0.0.1 ping statistics --- 00:19:35.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.719 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:35.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:19:35.719 00:19:35.719 --- 10.0.0.2 ping statistics --- 00:19:35.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.719 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74801 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74801 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74801 ']' 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
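For reference, the nvmf_veth_init sequence captured above builds a small bridged topology: one veth pair per initiator interface stays in the root namespace, one veth pair per target interface is moved into nvmf_tgt_ns_spdk, the bridge-side peers are enslaved to nvmf_br, and iptables rules admit TCP port 4420. A minimal standalone sketch of that setup follows (first initiator/target pair only; interface names and 10.0.0.0/24 addresses taken from the log, the second pair with 10.0.0.2/10.0.0.4 follows the same pattern; error handling and teardown omitted):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target side, verifying the bridged path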
00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.719 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:35.984 [2024-11-20 05:29:50.251690] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:35.984 [2024-11-20 05:29:50.251828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.984 [2024-11-20 05:29:50.415675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.984 [2024-11-20 05:29:50.462165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.984 [2024-11-20 05:29:50.471156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.984 [2024-11-20 05:29:50.471558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.984 [2024-11-20 05:29:50.471605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.984 [2024-11-20 05:29:50.471623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.984 [2024-11-20 05:29:50.472559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.984 [2024-11-20 05:29:50.472632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.984 [2024-11-20 05:29:50.472713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.984 [2024-11-20 05:29:50.472726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.243 [2024-11-20 05:29:50.520962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.243 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.243 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:19:36.243 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.243 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.243 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:36.501 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.501 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:36.501 05:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:37.066 05:29:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:37.066 05:29:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:37.326 05:29:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:37.326 05:29:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.892 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:37.892 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:19:37.892 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:37.892 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:37.892 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.150 [2024-11-20 05:29:52.577659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.150 05:29:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.716 05:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:38.716 05:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.284 05:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:39.284 05:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:39.542 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:40.108 [2024-11-20 05:29:54.387979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:40.108 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:40.365 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:40.365 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:40.365 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:40.365 05:29:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:41.739 Initializing NVMe Controllers 00:19:41.739 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:41.739 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:41.739 Initialization complete. Launching workers. 00:19:41.739 ======================================================== 00:19:41.739 Latency(us) 00:19:41.739 Device Information : IOPS MiB/s Average min max 00:19:41.739 PCIE (0000:00:10.0) NSID 1 from core 0: 23520.91 91.88 1360.23 309.68 14759.44 00:19:41.739 ======================================================== 00:19:41.740 Total : 23520.91 91.88 1360.23 309.68 14759.44 00:19:41.740 00:19:41.740 05:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:43.125 Initializing NVMe Controllers 00:19:43.125 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.125 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.125 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:43.125 Initialization complete. Launching workers. 
00:19:43.125 ======================================================== 00:19:43.125 Latency(us) 00:19:43.125 Device Information : IOPS MiB/s Average min max 00:19:43.125 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2493.92 9.74 400.65 137.15 4390.35 00:19:43.125 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.03 7930.82 12044.49 00:19:43.125 ======================================================== 00:19:43.125 Total : 2618.91 10.23 766.36 137.15 12044.49 00:19:43.125 00:19:43.125 05:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:44.605 Initializing NVMe Controllers 00:19:44.605 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.605 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:44.605 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:44.605 Initialization complete. Launching workers. 00:19:44.605 ======================================================== 00:19:44.605 Latency(us) 00:19:44.605 Device Information : IOPS MiB/s Average min max 00:19:44.605 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6417.47 25.07 4986.87 1008.59 12221.86 00:19:44.605 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4006.05 15.65 7999.35 4853.95 11028.11 00:19:44.605 ======================================================== 00:19:44.605 Total : 10423.52 40.72 6144.65 1008.59 12221.86 00:19:44.605 00:19:44.605 05:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:44.605 05:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:47.137 Initializing NVMe Controllers 00:19:47.137 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.137 Controller IO queue size 128, less than required. 00:19:47.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:47.137 Controller IO queue size 128, less than required. 00:19:47.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:47.137 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:47.137 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:47.137 Initialization complete. Launching workers. 
00:19:47.137 ======================================================== 00:19:47.137 Latency(us) 00:19:47.137 Device Information : IOPS MiB/s Average min max 00:19:47.137 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1485.88 371.47 88360.58 49029.23 155916.44 00:19:47.137 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 637.02 159.25 210360.30 63648.04 360155.75 00:19:47.137 ======================================================== 00:19:47.137 Total : 2122.90 530.72 124969.11 49029.23 360155.75 00:19:47.137 00:19:47.137 05:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:47.396 Initializing NVMe Controllers 00:19:47.396 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.396 Controller IO queue size 128, less than required. 00:19:47.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:47.396 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:47.396 Controller IO queue size 128, less than required. 00:19:47.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:47.396 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:47.396 WARNING: Some requested NVMe devices were skipped 00:19:47.396 No valid NVMe controllers or AIO or URING devices found 00:19:47.396 05:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:50.020 Initializing NVMe Controllers 00:19:50.020 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:50.020 Controller IO queue size 128, less than required. 00:19:50.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:50.020 Controller IO queue size 128, less than required. 00:19:50.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:50.020 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:50.020 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:50.020 Initialization complete. Launching workers. 
00:19:50.020 00:19:50.020 ==================== 00:19:50.020 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:50.020 TCP transport: 00:19:50.020 polls: 8962 00:19:50.020 idle_polls: 5755 00:19:50.020 sock_completions: 3207 00:19:50.020 nvme_completions: 5213 00:19:50.020 submitted_requests: 7798 00:19:50.020 queued_requests: 1 00:19:50.020 00:19:50.020 ==================== 00:19:50.020 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:50.020 TCP transport: 00:19:50.020 polls: 10431 00:19:50.020 idle_polls: 7275 00:19:50.020 sock_completions: 3156 00:19:50.020 nvme_completions: 5097 00:19:50.020 submitted_requests: 7580 00:19:50.020 queued_requests: 1 00:19:50.020 ======================================================== 00:19:50.020 Latency(us) 00:19:50.020 Device Information : IOPS MiB/s Average min max 00:19:50.020 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1300.88 325.22 100686.08 46984.81 155915.74 00:19:50.020 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1271.92 317.98 102667.88 33820.73 171894.58 00:19:50.020 ======================================================== 00:19:50.020 Total : 2572.80 643.20 101665.83 33820.73 171894.58 00:19:50.020 00:19:50.020 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:50.020 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.603 rmmod nvme_tcp 00:19:50.603 rmmod nvme_fabrics 00:19:50.603 rmmod nvme_keyring 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74801 ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74801 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74801 ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74801 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74801 00:19:50.603 killing process with pid 74801 00:19:50.603 05:30:04 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74801' 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74801 00:19:50.603 05:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74801 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:50.865 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:51.124 ************************************ 00:19:51.124 END TEST nvmf_perf 00:19:51.124 ************************************ 
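Condensed from the rpc.py calls logged during the run, the target-side configuration that the nvmf_perf test exercised amounts to the following sequence (a sketch only, with paths, bdev names and NQNs exactly as logged; host/perf.sh additionally wires in the local NVMe controller via gen_nvme.sh and load_subsystem_config before this point):
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                   # creates Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # one of the fabric-side measurements from the log, run against that listener:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'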
00:19:51.124 00:19:51.124 real 0m16.045s 00:19:51.124 user 0m58.842s 00:19:51.124 sys 0m4.564s 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.124 ************************************ 00:19:51.124 START TEST nvmf_fio_host 00:19:51.124 ************************************ 00:19:51.124 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:51.384 * Looking for test storage... 00:19:51.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:51.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.384 --rc genhtml_branch_coverage=1 00:19:51.384 --rc genhtml_function_coverage=1 00:19:51.384 --rc genhtml_legend=1 00:19:51.384 --rc geninfo_all_blocks=1 00:19:51.384 --rc geninfo_unexecuted_blocks=1 00:19:51.384 00:19:51.384 ' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:51.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.384 --rc genhtml_branch_coverage=1 00:19:51.384 --rc genhtml_function_coverage=1 00:19:51.384 --rc genhtml_legend=1 00:19:51.384 --rc geninfo_all_blocks=1 00:19:51.384 --rc geninfo_unexecuted_blocks=1 00:19:51.384 00:19:51.384 ' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:51.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.384 --rc genhtml_branch_coverage=1 00:19:51.384 --rc genhtml_function_coverage=1 00:19:51.384 --rc genhtml_legend=1 00:19:51.384 --rc geninfo_all_blocks=1 00:19:51.384 --rc geninfo_unexecuted_blocks=1 00:19:51.384 00:19:51.384 ' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:51.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.384 --rc genhtml_branch_coverage=1 00:19:51.384 --rc genhtml_function_coverage=1 00:19:51.384 --rc genhtml_legend=1 00:19:51.384 --rc geninfo_all_blocks=1 00:19:51.384 --rc geninfo_unexecuted_blocks=1 00:19:51.384 00:19:51.384 ' 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.384 05:30:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:19:51.384 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.385 05:30:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:51.385 Cannot find device "nvmf_init_br" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:51.385 Cannot find device "nvmf_init_br2" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:51.385 Cannot find device "nvmf_tgt_br" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:51.385 Cannot find device "nvmf_tgt_br2" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:51.385 Cannot find device "nvmf_init_br" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:51.385 Cannot find device "nvmf_init_br2" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:51.385 Cannot find device "nvmf_tgt_br" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:51.385 Cannot find device "nvmf_tgt_br2" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:51.385 Cannot find device "nvmf_br" 00:19:51.385 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:51.386 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:51.644 Cannot find device "nvmf_init_if" 00:19:51.644 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:51.644 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:51.644 Cannot find device "nvmf_init_if2" 00:19:51.644 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:51.645 05:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:51.645 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:51.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:51.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:19:51.645 00:19:51.645 --- 10.0.0.3 ping statistics --- 00:19:51.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.645 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:51.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:51.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:19:51.904 00:19:51.904 --- 10.0.0.4 ping statistics --- 00:19:51.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.904 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:51.904 00:19:51.904 --- 10.0.0.1 ping statistics --- 00:19:51.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.904 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:51.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:19:51.904 00:19:51.904 --- 10.0.0.2 ping statistics --- 00:19:51.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.904 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75272 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75272 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 75272 ']' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.904 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.904 [2024-11-20 05:30:06.271838] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:19:51.904 [2024-11-20 05:30:06.271981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.163 [2024-11-20 05:30:06.422564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.163 [2024-11-20 05:30:06.472759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.163 [2024-11-20 05:30:06.472839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.163 [2024-11-20 05:30:06.472856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.163 [2024-11-20 05:30:06.472868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.163 [2024-11-20 05:30:06.472879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
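
For reference, the veth/namespace/bridge sandbox that nvmf_veth_init assembles in the trace above can be reproduced standalone with a script along the following lines. Device names and addresses are copied from the log; the iptables comment tag is simplified here (the test helper embeds the full rule text after "SPDK_NVMF:"), and root privileges on Linux are assumed — this is a sketch, not the test's actual helper.

    #!/usr/bin/env bash
    # Sketch of the sandbox nvmf_veth_init builds above; run as root on Linux.
    set -euo pipefail

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Two initiator-side and two target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace; initiator ends stay on the host.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # One bridge stitches the four host-side peers into a single L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP (port 4420) on the initiator interfaces and allow
    # bridge-local forwarding; the comment tag lets teardown strip exactly
    # these rules later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

    # Sanity check both directions, as the trace does.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec "$NS" ping -c 1 10.0.0.1
    ip netns exec "$NS" ping -c 1 10.0.0.2

The bridge ties the four host-side veth peers into one segment, which is why the host can ping 10.0.0.3/10.0.0.4 inside the namespace and the namespace can ping 10.0.0.1/10.0.0.2 back, as the output above confirms before the target is started.
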
00:19:52.163 [2024-11-20 05:30:06.473977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.163 [2024-11-20 05:30:06.474045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.163 [2024-11-20 05:30:06.474174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.163 [2024-11-20 05:30:06.474187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.163 [2024-11-20 05:30:06.512447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:52.163 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.163 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:19:52.163 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:52.729 [2024-11-20 05:30:06.943810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.729 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:52.729 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.729 05:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.729 05:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:52.988 Malloc1 00:19:52.988 05:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.554 05:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.812 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:54.378 [2024-11-20 05:30:08.586785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:54.378 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:19:54.711 05:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:54.711 05:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:54.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:54.711 fio-3.35 00:19:54.711 Starting 1 thread 00:19:57.238 00:19:57.238 test: (groupid=0, jobs=1): err= 0: pid=75353: Wed Nov 20 05:30:11 2024 00:19:57.238 read: IOPS=8043, BW=31.4MiB/s (32.9MB/s)(63.1MiB/2007msec) 00:19:57.238 slat (usec): min=2, max=167, avg= 2.99, stdev= 1.93 00:19:57.238 clat (usec): min=1521, max=13669, avg=8276.42, stdev=1021.54 00:19:57.238 lat (usec): min=1558, max=13671, avg=8279.41, stdev=1021.39 00:19:57.238 clat percentiles (usec): 00:19:57.238 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:19:57.238 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8225], 00:19:57.238 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10290], 00:19:57.238 | 99.00th=[10945], 99.50th=[11207], 99.90th=[12387], 99.95th=[13304], 00:19:57.238 | 99.99th=[13698] 00:19:57.238 bw ( KiB/s): min=31400, max=33032, per=99.98%, avg=32168.00, stdev=762.09, samples=4 00:19:57.238 iops : min= 7850, max= 8258, avg=8042.00, stdev=190.52, samples=4 00:19:57.238 write: IOPS=8024, BW=31.3MiB/s (32.9MB/s)(62.9MiB/2007msec); 0 zone resets 00:19:57.238 slat (usec): min=2, max=127, avg= 3.17, stdev= 1.56 00:19:57.238 clat (usec): min=1131, max=13523, avg=7597.43, stdev=934.44 00:19:57.238 lat (usec): min=1138, max=13525, avg=7600.60, stdev=934.37 00:19:57.238 clat percentiles 
(usec): 00:19:57.238 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:19:57.238 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:19:57.238 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9372], 00:19:57.238 | 99.00th=[10028], 99.50th=[10159], 99.90th=[11076], 99.95th=[11994], 00:19:57.238 | 99.99th=[13435] 00:19:57.238 bw ( KiB/s): min=31232, max=32808, per=99.93%, avg=32076.00, stdev=649.02, samples=4 00:19:57.238 iops : min= 7808, max= 8202, avg=8019.00, stdev=162.25, samples=4 00:19:57.238 lat (msec) : 2=0.04%, 4=0.12%, 10=95.49%, 20=4.35% 00:19:57.238 cpu : usr=65.20%, sys=25.67%, ctx=5, majf=0, minf=7 00:19:57.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.238 issued rwts: total=16144,16105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.238 00:19:57.238 Run status group 0 (all jobs): 00:19:57.238 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.1MiB (66.1MB), run=2007-2007msec 00:19:57.238 WRITE: bw=31.3MiB/s (32.9MB/s), 31.3MiB/s-31.3MiB/s (32.9MB/s-32.9MB/s), io=62.9MiB (66.0MB), run=2007-2007msec 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:57.238 05:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:57.238 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:57.238 fio-3.35 00:19:57.238 Starting 1 thread 00:19:59.769 00:19:59.769 test: (groupid=0, jobs=1): err= 0: pid=75396: Wed Nov 20 05:30:14 2024 00:19:59.769 read: IOPS=7729, BW=121MiB/s (127MB/s)(242MiB/2006msec) 00:19:59.769 slat (usec): min=3, max=121, avg= 4.09, stdev= 1.83 00:19:59.770 clat (usec): min=2676, max=17987, avg=9013.20, stdev=2549.36 00:19:59.770 lat (usec): min=2680, max=17990, avg=9017.29, stdev=2549.49 00:19:59.770 clat percentiles (usec): 00:19:59.770 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6718], 00:19:59.770 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:19:59.770 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12518], 95.00th=[13435], 00:19:59.770 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16319], 99.95th=[16581], 00:19:59.770 | 99.99th=[16909] 00:19:59.770 bw ( KiB/s): min=60128, max=71200, per=52.24%, avg=64608.00, stdev=4684.33, samples=4 00:19:59.770 iops : min= 3758, max= 4450, avg=4038.00, stdev=292.77, samples=4 00:19:59.770 write: IOPS=4579, BW=71.6MiB/s (75.0MB/s)(132MiB/1845msec); 0 zone resets 00:19:59.770 slat (usec): min=37, max=215, avg=40.11, stdev= 5.13 00:19:59.770 clat (usec): min=3308, max=23890, avg=12958.16, stdev=2423.88 00:19:59.770 lat (usec): min=3351, max=23931, avg=12998.27, stdev=2424.72 00:19:59.770 clat percentiles (usec): 00:19:59.770 | 1.00th=[ 8225], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10814], 00:19:59.770 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:19:59.770 | 70.00th=[14222], 80.00th=[15139], 90.00th=[16319], 95.00th=[16909], 00:19:59.770 | 99.00th=[18482], 99.50th=[19792], 99.90th=[23200], 99.95th=[23725], 00:19:59.770 | 99.99th=[23987] 00:19:59.770 bw ( KiB/s): min=61984, max=74016, per=91.74%, avg=67216.00, stdev=5048.06, samples=4 00:19:59.770 iops : min= 3874, max= 4626, avg=4201.00, stdev=315.50, samples=4 00:19:59.770 lat (msec) : 4=0.39%, 10=46.08%, 20=53.39%, 50=0.14% 00:19:59.770 cpu : usr=79.90%, sys=15.06%, ctx=5, majf=0, minf=12 00:19:59.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:59.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.770 issued rwts: total=15505,8449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.770 00:19:59.770 Run status group 0 (all jobs): 00:19:59.770 READ: bw=121MiB/s (127MB/s), 
121MiB/s-121MiB/s (127MB/s-127MB/s), io=242MiB (254MB), run=2006-2006msec 00:19:59.770 WRITE: bw=71.6MiB/s (75.0MB/s), 71.6MiB/s-71.6MiB/s (75.0MB/s-75.0MB/s), io=132MiB (138MB), run=1845-1845msec 00:19:59.770 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.027 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.285 rmmod nvme_tcp 00:20:00.285 rmmod nvme_fabrics 00:20:00.285 rmmod nvme_keyring 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75272 ']' 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75272 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 75272 ']' 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 75272 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75272 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:00.285 killing process with pid 75272 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75272' 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 75272 00:20:00.285 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 75272 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:00.544 05:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.544 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.544 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:00.544 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.544 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.544 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:20:00.803 00:20:00.803 real 0m9.463s 00:20:00.803 user 0m38.263s 00:20:00.803 sys 0m2.601s 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.803 ************************************ 00:20:00.803 END TEST nvmf_fio_host 00:20:00.803 ************************************ 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.803 ************************************ 00:20:00.803 START TEST nvmf_failover 00:20:00.803 
************************************ 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:00.803 * Looking for test storage... 00:20:00.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:00.803 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:00.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.804 --rc genhtml_branch_coverage=1 00:20:00.804 --rc genhtml_function_coverage=1 00:20:00.804 --rc genhtml_legend=1 00:20:00.804 --rc geninfo_all_blocks=1 00:20:00.804 --rc geninfo_unexecuted_blocks=1 00:20:00.804 00:20:00.804 ' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:00.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.804 --rc genhtml_branch_coverage=1 00:20:00.804 --rc genhtml_function_coverage=1 00:20:00.804 --rc genhtml_legend=1 00:20:00.804 --rc geninfo_all_blocks=1 00:20:00.804 --rc geninfo_unexecuted_blocks=1 00:20:00.804 00:20:00.804 ' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:00.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.804 --rc genhtml_branch_coverage=1 00:20:00.804 --rc genhtml_function_coverage=1 00:20:00.804 --rc genhtml_legend=1 00:20:00.804 --rc geninfo_all_blocks=1 00:20:00.804 --rc geninfo_unexecuted_blocks=1 00:20:00.804 00:20:00.804 ' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:00.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.804 --rc genhtml_branch_coverage=1 00:20:00.804 --rc genhtml_function_coverage=1 00:20:00.804 --rc genhtml_legend=1 00:20:00.804 --rc geninfo_all_blocks=1 00:20:00.804 --rc geninfo_unexecuted_blocks=1 00:20:00.804 00:20:00.804 ' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.804 
05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
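
The NVME_HOSTNQN/NVME_HOSTID/NVME_CONNECT variables prepared above are consumed by host tests that use the kernel initiator; this particular failover run drives I/O through bdevperf instead, but purely as an illustration (standard nvme-cli, and the cnode1 subsystem and 10.0.0.3:4420 listener created later in this run), they would be used roughly like this:

    # Illustration only: how NVME_* variables are typically passed to nvme-cli.
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93
    NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93

    # Connect the kernel NVMe/TCP initiator to the subsystem in the namespace.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

    nvme list                                      # the namespace appears as /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach when finished
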
00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.804 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:00.805 Cannot find device "nvmf_init_br" 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:00.805 Cannot find device "nvmf_init_br2" 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:00.805 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:20:01.063 Cannot find device "nvmf_tgt_br" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.063 Cannot find device "nvmf_tgt_br2" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:01.063 Cannot find device "nvmf_init_br" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:01.063 Cannot find device "nvmf_init_br2" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:01.063 Cannot find device "nvmf_tgt_br" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:01.063 Cannot find device "nvmf_tgt_br2" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:01.063 Cannot find device "nvmf_br" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:01.063 Cannot find device "nvmf_init_if" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:01.063 Cannot find device "nvmf_init_if2" 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:20:01.063 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.064 
05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:01.064 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:01.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:20:01.373 00:20:01.373 --- 10.0.0.3 ping statistics --- 00:20:01.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.373 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:01.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:01.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:20:01.373 00:20:01.373 --- 10.0.0.4 ping statistics --- 00:20:01.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.373 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:01.373 00:20:01.373 --- 10.0.0.1 ping statistics --- 00:20:01.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.373 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:01.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:20:01.373 00:20:01.373 --- 10.0.0.2 ping statistics --- 00:20:01.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.373 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75673 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75673 00:20:01.373 05:30:15 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75673 ']' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:01.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:01.373 05:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:01.373 [2024-11-20 05:30:15.753933] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:20:01.373 [2024-11-20 05:30:15.754049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.631 [2024-11-20 05:30:15.923069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:01.631 [2024-11-20 05:30:15.963461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.631 [2024-11-20 05:30:15.963723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.631 [2024-11-20 05:30:15.963840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.631 [2024-11-20 05:30:15.963958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.631 [2024-11-20 05:30:15.964049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.631 [2024-11-20 05:30:15.964947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.631 [2024-11-20 05:30:15.965006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.631 [2024-11-20 05:30:15.965017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.631 [2024-11-20 05:30:15.996295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.631 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:01.888 [2024-11-20 05:30:16.313774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.888 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:02.144 Malloc0 00:20:02.144 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.709 05:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.967 05:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:03.532 [2024-11-20 05:30:17.763866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:03.532 05:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:03.791 [2024-11-20 05:30:18.200314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:03.791 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:04.357 [2024-11-20 05:30:18.572733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75729 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
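
Condensed from the RPC calls traced above and in the remainder of this run, the failover exercise reduces to the sequence sketched below: one malloc-backed subsystem listening on three TCP ports, bdevperf attaching two paths to the same controller with -x failover, and listeners being removed and re-added under I/O. Paths are abbreviated relative to the full /home/vagrant/spdk_repo/spdk paths in the log, and a bdevperf instance started as shown above (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15) is assumed to be running already.

    # Condensed failover sequence; paths abbreviated relative to the log.
    rpc="scripts/rpc.py"
    brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: one malloc-backed subsystem listening on three TCP ports.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
    done

    # Host side: attach two paths to the same bdev with -x failover, start I/O.
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn" -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$nqn" -x failover
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Pull listeners out from under the running I/O to force path switches.
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$nqn" -x failover
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422
    wait    # bdevperf's 15 s verify run completes and reports IOPS/latency as in the results below
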
00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75729 /var/tmp/bdevperf.sock 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75729 ']' 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.357 05:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:04.616 05:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.616 05:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:04.616 05:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:05.181 NVMe0n1 00:20:05.181 05:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:05.748 00:20:05.748 05:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75745 00:20:05.748 05:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.748 05:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:06.697 05:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:06.955 [2024-11-20 05:30:21.306435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcad30 is same with the state(6) to be set 00:20:06.955 05:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:10.237 05:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:10.237 00:20:10.237 05:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:10.813 05:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:14.095 05:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:14.095 [2024-11-20 05:30:28.344857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.095 
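On the host side, bdevperf attaches the same subsystem through two portals with -x failover, so the bdev_nvme layer keeps an alternate path ready and retries I/O on it when the active listener disappears. A condensed sketch of the attach-and-trigger sequence traced above (socket path, controller name, and addresses copied from the log; rpc.py and bdevperf.py stand for the full script paths in the trace; only the first failover trigger is shown):

    # Host-side sketch: two paths to one controller, then remove the active listener mid-workload.
    SOCK=/var/tmp/bdevperf.sock
    rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover              # primary path, exposes NVMe0n1
    rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover              # alternate path, same controller
    bdevperf.py -s "$SOCK" perform_tests &                             # 15 s verify workload in the background
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # queued I/O on 4420 is aborted (SQ deletion) and bdev_nvme fails over to the 4421 listener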
05:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:15.030 05:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:15.288 05:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75745 00:20:21.854 { 00:20:21.854 "results": [ 00:20:21.854 { 00:20:21.854 "job": "NVMe0n1", 00:20:21.854 "core_mask": "0x1", 00:20:21.854 "workload": "verify", 00:20:21.854 "status": "finished", 00:20:21.854 "verify_range": { 00:20:21.854 "start": 0, 00:20:21.854 "length": 16384 00:20:21.854 }, 00:20:21.854 "queue_depth": 128, 00:20:21.854 "io_size": 4096, 00:20:21.854 "runtime": 15.009737, 00:20:21.854 "iops": 8365.769500158463, 00:20:21.854 "mibps": 32.678787109994, 00:20:21.854 "io_failed": 3181, 00:20:21.854 "io_timeout": 0, 00:20:21.854 "avg_latency_us": 14887.775946743453, 00:20:21.854 "min_latency_us": 700.0436363636363, 00:20:21.854 "max_latency_us": 18707.54909090909 00:20:21.854 } 00:20:21.854 ], 00:20:21.854 "core_count": 1 00:20:21.854 } 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75729 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75729 ']' 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75729 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75729 00:20:21.854 killing process with pid 75729 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75729' 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75729 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75729 00:20:21.854 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:21.854 [2024-11-20 05:30:18.641149] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:20:21.854 [2024-11-20 05:30:18.641253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:20:21.854 [2024-11-20 05:30:18.801800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.854 [2024-11-20 05:30:18.848999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.854 [2024-11-20 05:30:18.879158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.854 Running I/O for 15 seconds... 
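The summary JSON a few lines up is internally consistent: with "io_size": 4096 the MiB/s figure is simply iops multiplied by 4 KiB, which is a quick check worth doing when comparing runs.

    # Sanity check of the reported throughput: 8365.77 IOPS x 4096 B ~= 32.68 MiB/s (matches "mibps")
    awk 'BEGIN { printf "%.2f MiB/s\n", 8365.769500158463 * 4096 / (1024 * 1024) }'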
00:20:21.854 8448.00 IOPS, 33.00 MiB/s [2024-11-20T05:30:36.367Z] [2024-11-20 05:30:21.306914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.306969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.854 [2024-11-20 05:30:21.307260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:21.854 [2024-11-20 05:30:21.307276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 
05:30:21.307608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.854 [2024-11-20 05:30:21.307653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.854 [2024-11-20 05:30:21.307668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.307722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.307752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.307811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.307842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.307872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.307917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.307950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.307981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.307997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.855 [2024-11-20 05:30:21.308817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:21.855 [2024-11-20 05:30:21.308919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.855 [2024-11-20 05:30:21.308959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.855 [2024-11-20 05:30:21.308975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.308989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309226] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.856 [2024-11-20 05:30:21.309564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.309971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.309987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:21.856 [2024-11-20 05:30:21.310173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.856 [2024-11-20 05:30:21.310186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.856 [2024-11-20 05:30:21.310202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.857 [2024-11-20 05:30:21.310402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3d50 is same with the state(6) to be set 00:20:21.857 [2024-11-20 05:30:21.310435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75208 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75632 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75640 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75648 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75656 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75664 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75672 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310785] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75680 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75688 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75696 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.310956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.310966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75704 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.310980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.310993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75712 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75720 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75728 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75736 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75744 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75752 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75760 len:8 PRP1 0x0 PRP2 0x0 00:20:21.857 [2024-11-20 05:30:21.311327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.857 [2024-11-20 05:30:21.311341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.857 [2024-11-20 05:30:21.311351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.857 [2024-11-20 05:30:21.311361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75768 len:8 PRP1 0x0 PRP2 0x0 00:20:21.858 [2024-11-20 05:30:21.311374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:21.311426] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 
10.0.0.3:4420 to 10.0.0.3:4421 00:20:21.858 [2024-11-20 05:30:21.311487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.858 [2024-11-20 05:30:21.311509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:21.311525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.858 [2024-11-20 05:30:21.311538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:21.311553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.858 [2024-11-20 05:30:21.311566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:21.311581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.858 [2024-11-20 05:30:21.311594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:21.311608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:21.858 [2024-11-20 05:30:21.311672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757710 (9): Bad file descriptor 00:20:21.858 [2024-11-20 05:30:21.315622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:21.858 [2024-11-20 05:30:21.352310] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
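The burst of ABORTED - SQ DELETION completions above is the expected signature of the trigger: removing the 10.0.0.3:4420 listener tears down the queue pair, the queued reads and writes complete as aborted, and bdev_nvme reacts by failing the transport ID over to 4421 and resetting the controller, which the log reports as successful. To confirm the active path from outside the log, one illustrative query (not part of failover.sh) against the bdevperf RPC socket would be:

    # Illustrative only: show the controller's current transport address after the failover.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0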
00:20:21.858 8399.50 IOPS, 32.81 MiB/s [2024-11-20T05:30:36.371Z] 8527.67 IOPS, 33.31 MiB/s [2024-11-20T05:30:36.371Z] 8579.75 IOPS, 33.51 MiB/s [2024-11-20T05:30:36.371Z] [2024-11-20 05:30:25.040887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.040971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.858 [2024-11-20 05:30:25.041230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.858 [2024-11-20 05:30:25.041691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.858 [2024-11-20 05:30:25.041707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.041721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.041980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.041997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 
05:30:25.042275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.859 [2024-11-20 05:30:25.042750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.859 [2024-11-20 05:30:25.042886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.859 [2024-11-20 05:30:25.042913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.042946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.042960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.042976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.042990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 
[2024-11-20 05:30:25.043546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.860 [2024-11-20 05:30:25.043757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.043979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.043993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.860 [2024-11-20 05:30:25.044009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.860 [2024-11-20 05:30:25.044023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.861 [2024-11-20 05:30:25.044302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.861 [2024-11-20 05:30:25.044510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5f00 is same with the state(6) to be set 00:20:21.861 [2024-11-20 05:30:25.044543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 
05:30:25.044846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.044956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.044969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.044979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.044990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.045003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.045018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.045029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.045039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.045052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.045066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.045076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.045095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.045110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.045124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.045134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.045145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.045158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.045172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.861 [2024-11-20 05:30:25.045182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.861 [2024-11-20 05:30:25.045193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:20:21.861 [2024-11-20 05:30:25.045206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.861 [2024-11-20 05:30:25.045220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.862 [2024-11-20 05:30:25.045230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.862 [2024-11-20 05:30:25.045240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:20:21.862 [2024-11-20 05:30:25.045253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.862 [2024-11-20 05:30:25.045277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.862 [2024-11-20 05:30:25.045288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:20:21.862 [2024-11-20 05:30:25.045301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.862 [2024-11-20 05:30:25.045325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.862 [2024-11-20 05:30:25.045335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:20:21.862 [2024-11-20 05:30:25.045349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.862 [2024-11-20 05:30:25.045373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.862 [2024-11-20 05:30:25.045384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:20:21.862 [2024-11-20 05:30:25.045397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045451] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:20:21.862 [2024-11-20 05:30:25.045515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.862 [2024-11-20 05:30:25.045537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:21.862 [2024-11-20 05:30:25.045554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.862 [2024-11-20 05:30:25.045583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.862 [2024-11-20 05:30:25.045613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.862 [2024-11-20 05:30:25.045641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:25.045655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:21.862 [2024-11-20 05:30:25.049960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:21.862 [2024-11-20 05:30:25.050022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757710 (9): Bad file descriptor 00:20:21.862 [2024-11-20 05:30:25.074384] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:20:21.862 8465.00 IOPS, 33.07 MiB/s [2024-11-20T05:30:36.375Z] 8312.83 IOPS, 32.47 MiB/s [2024-11-20T05:30:36.375Z] 8391.57 IOPS, 32.78 MiB/s [2024-11-20T05:30:36.375Z] 8441.50 IOPS, 32.97 MiB/s [2024-11-20T05:30:36.375Z] 8435.11 IOPS, 32.95 MiB/s [2024-11-20T05:30:36.375Z] [2024-11-20 05:30:29.640041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.862 [2024-11-20 05:30:29.640618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.862 [2024-11-20 05:30:29.640631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:21.862 [2024-11-20 05:30:29.640648 .. 05:30:29.643875] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: same command/completion pair repeated for every remaining outstanding I/O on qid:1 (READ lba:127952..128384 and WRITE lba:128528..128904, len:8 each), each completed ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:21.866 [2024-11-20 05:30:29.643890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5bc0 is same with the state(6) to be set 00:20:21.866 [2024-11-20 05:30:29.643918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.643932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.643943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128392 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.643965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.643981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.643991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128912 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128920 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128928 len:8 PRP1 0x0 PRP2 0x0 [2024-11-20 05:30:29.644113]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128936 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128944 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128952 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128960 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.866 [2024-11-20 05:30:29.644353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.866 [2024-11-20 05:30:29.644370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128968 len:8 PRP1 0x0 PRP2 0x0 00:20:21.866 [2024-11-20 05:30:29.644384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644440] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:20:21.866 [2024-11-20 05:30:29.644521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.866 [2024-11-20 05:30:29.644544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.866 [2024-11-20 05:30:29.644575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.866 [2024-11-20 05:30:29.644603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.866 [2024-11-20 05:30:29.644631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.866 [2024-11-20 05:30:29.644645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:21.866 [2024-11-20 05:30:29.644704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757710 (9): Bad file descriptor 00:20:21.866 [2024-11-20 05:30:29.648654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:21.866 [2024-11-20 05:30:29.673160] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:20:21.866 8349.30 IOPS, 32.61 MiB/s [2024-11-20T05:30:36.380Z] 8384.09 IOPS, 32.75 MiB/s [2024-11-20T05:30:36.380Z] 8358.75 IOPS, 32.65 MiB/s [2024-11-20T05:30:36.380Z] 8339.69 IOPS, 32.58 MiB/s [2024-11-20T05:30:36.380Z] 8327.14 IOPS, 32.53 MiB/s [2024-11-20T05:30:36.380Z] 8365.20 IOPS, 32.68 MiB/s 00:20:21.867 Latency(us) 00:20:21.867 [2024-11-20T05:30:36.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:21.867 Verification LBA range: start 0x0 length 0x4000 00:20:21.867 NVMe0n1 : 15.01 8365.77 32.68 211.93 0.00 14887.78 700.04 18707.55 00:20:21.867 [2024-11-20T05:30:36.380Z] =================================================================================================================== 00:20:21.867 [2024-11-20T05:30:36.380Z] Total : 8365.77 32.68 211.93 0.00 14887.78 700.04 18707.55 00:20:21.867 Received shutdown signal, test time was about 15.000000 seconds 00:20:21.867 00:20:21.867 Latency(us) 00:20:21.867 [2024-11-20T05:30:36.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.867 [2024-11-20T05:30:36.380Z] =================================================================================================================== 00:20:21.867 [2024-11-20T05:30:36.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75931 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75931 /var/tmp/bdevperf.sock 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75931 ']' 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.867 05:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:22.433 05:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.433 05:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:22.433 05:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:22.691 [2024-11-20 05:30:37.138165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:22.691 05:30:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:23.257 [2024-11-20 05:30:37.622821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:23.257 05:30:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:23.822 NVMe0n1 00:20:23.822 05:30:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:24.387 00:20:24.387 05:30:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:24.646 00:20:24.646 05:30:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.646 05:30:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:24.917 05:30:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:25.484 05:30:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:28.765 05:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:28.765 05:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:28.765 05:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76025 00:20:28.765 05:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.765 05:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76025 00:20:30.140 { 00:20:30.140 "results": [ 00:20:30.140 { 00:20:30.140 "job": "NVMe0n1", 00:20:30.140 "core_mask": "0x1", 00:20:30.140 "workload": "verify", 00:20:30.140 "status": "finished", 00:20:30.140 "verify_range": { 00:20:30.140 "start": 0, 00:20:30.140 "length": 16384 00:20:30.140 }, 00:20:30.140 "queue_depth": 128, 00:20:30.140 "io_size": 4096, 00:20:30.140 "runtime": 1.010559, 00:20:30.140 "iops": 6806.134030769109, 00:20:30.140 "mibps": 26.58646105769183, 00:20:30.140 "io_failed": 0, 00:20:30.140 "io_timeout": 0, 00:20:30.140 "avg_latency_us": 18673.73427159058, 00:20:30.140 "min_latency_us": 1630.9527272727273, 00:20:30.140 "max_latency_us": 20494.894545454546 00:20:30.140 } 00:20:30.140 ], 00:20:30.140 "core_count": 1 00:20:30.140 } 00:20:30.140 05:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:30.140 [2024-11-20 05:30:35.508562] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:20:30.140 [2024-11-20 05:30:35.508737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75931 ] 00:20:30.140 [2024-11-20 05:30:35.665682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.140 [2024-11-20 05:30:35.714159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.140 [2024-11-20 05:30:35.749969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.140 [2024-11-20 05:30:39.818935] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:30.140 [2024-11-20 05:30:39.819132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.140 [2024-11-20 05:30:39.819177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.140 [2024-11-20 05:30:39.819214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.140 [2024-11-20 05:30:39.819245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.140 [2024-11-20 05:30:39.819275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.140 [2024-11-20 05:30:39.819305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.140 [2024-11-20 05:30:39.819335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.140 [2024-11-20 05:30:39.819363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.141 [2024-11-20 05:30:39.819391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:30.141 [2024-11-20 05:30:39.819471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:30.141 [2024-11-20 05:30:39.819527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf66710 (9): Bad file descriptor 00:20:30.141 [2024-11-20 05:30:39.828562] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:20:30.141 Running I/O for 1 seconds... 
00:20:30.141 6750.00 IOPS, 26.37 MiB/s 00:20:30.141 Latency(us) 00:20:30.141 [2024-11-20T05:30:44.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.141 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:30.141 Verification LBA range: start 0x0 length 0x4000 00:20:30.141 NVMe0n1 : 1.01 6806.13 26.59 0.00 0.00 18673.73 1630.95 20494.89 00:20:30.141 [2024-11-20T05:30:44.654Z] =================================================================================================================== 00:20:30.141 [2024-11-20T05:30:44.654Z] Total : 6806.13 26.59 0.00 0.00 18673.73 1630.95 20494.89 00:20:30.141 05:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.141 05:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:30.399 05:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.965 05:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:30.965 05:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:31.223 05:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:31.788 05:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75931 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75931 ']' 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75931 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75931 00:20:35.071 killing process with pid 75931 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75931' 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75931 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75931 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:35.071 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.638 rmmod nvme_tcp 00:20:35.638 rmmod nvme_fabrics 00:20:35.638 rmmod nvme_keyring 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75673 ']' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75673 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75673 ']' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75673 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75673 00:20:35.638 killing process with pid 75673 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75673' 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75673 00:20:35.638 05:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75673 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.896 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:35.897 00:20:35.897 real 0m35.289s 00:20:35.897 user 2m18.489s 00:20:35.897 sys 0m6.050s 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:35.897 ************************************ 00:20:35.897 05:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:35.897 END TEST nvmf_failover 00:20:35.897 ************************************ 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.156 ************************************ 00:20:36.156 START TEST nvmf_host_discovery 00:20:36.156 ************************************ 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:36.156 * Looking for test storage... 
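The failover teardown captured just above follows the common.sh cleanup path: delete the subsystem over RPC, unload the kernel initiator modules, kill the target process (pid 75673 in this run), strip the SPDK-tagged iptables rules, and dismantle the veth/bridge fixture. A minimal sketch of that same sequence, assuming the names this run used (the pid and subsystem differ per run, and the remove_spdk_ns step in the log amounts to deleting the namespace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                       # also drops nvme_fabrics/nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill 75673                                    # killprocess in the log additionally waits for the reactor to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only rules not tagged by the ipts/iptr helpers
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk              # the namespaced nvmf_tgt_if/nvmf_tgt_if2 are removed with it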
00:20:36.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:36.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.156 --rc genhtml_branch_coverage=1 00:20:36.156 --rc genhtml_function_coverage=1 00:20:36.156 --rc genhtml_legend=1 00:20:36.156 --rc geninfo_all_blocks=1 00:20:36.156 --rc geninfo_unexecuted_blocks=1 00:20:36.156 00:20:36.156 ' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:36.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.156 --rc genhtml_branch_coverage=1 00:20:36.156 --rc genhtml_function_coverage=1 00:20:36.156 --rc genhtml_legend=1 00:20:36.156 --rc geninfo_all_blocks=1 00:20:36.156 --rc geninfo_unexecuted_blocks=1 00:20:36.156 00:20:36.156 ' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:36.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.156 --rc genhtml_branch_coverage=1 00:20:36.156 --rc genhtml_function_coverage=1 00:20:36.156 --rc genhtml_legend=1 00:20:36.156 --rc geninfo_all_blocks=1 00:20:36.156 --rc geninfo_unexecuted_blocks=1 00:20:36.156 00:20:36.156 ' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:36.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.156 --rc genhtml_branch_coverage=1 00:20:36.156 --rc genhtml_function_coverage=1 00:20:36.156 --rc genhtml_legend=1 00:20:36.156 --rc geninfo_all_blocks=1 00:20:36.156 --rc geninfo_unexecuted_blocks=1 00:20:36.156 00:20:36.156 ' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.156 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.157 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
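Everything the discovery test does next hangs off the defaults being set here: fabric ports, a freshly generated host NQN, the RPC socket for the second SPDK app, and the veth address plan. Condensed into one block, and with the run-specific UUID shown only as an example value, the fixture looks roughly like:

  NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
  DISCOVERY_PORT=8009                                   # discovery service listener
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  NVME_HOSTNQN=$(nvme gen-hostnqn)                      # e.g. nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-... (changes every run)
  HOST_NQN=nqn.2021-12.io.spdk:test                     # NQN the host-side app presents when it connects
  HOST_SOCK=/tmp/host.sock                              # RPC socket of the host-side SPDK app
  # veth plan: initiators stay in the root namespace, targets live in nvmf_tgt_ns_spdk
  NVMF_FIRST_INITIATOR_IP=10.0.0.1  NVMF_SECOND_INITIATOR_IP=10.0.0.2
  NVMF_FIRST_TARGET_IP=10.0.0.3     NVMF_SECOND_TARGET_IP=10.0.0.4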
00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:36.157 Cannot find device "nvmf_init_br" 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:36.157 Cannot find device "nvmf_init_br2" 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:36.157 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:36.417 Cannot find device "nvmf_tgt_br" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.417 Cannot find device "nvmf_tgt_br2" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:36.417 Cannot find device "nvmf_init_br" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:36.417 Cannot find device "nvmf_init_br2" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:36.417 Cannot find device "nvmf_tgt_br" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:36.417 Cannot find device "nvmf_tgt_br2" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:36.417 Cannot find device "nvmf_br" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:36.417 Cannot find device "nvmf_init_if" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:36.417 Cannot find device "nvmf_init_if2" 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.417 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.677 05:30:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:36.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:36.677 00:20:36.677 --- 10.0.0.3 ping statistics --- 00:20:36.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.677 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:36.677 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:36.677 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:36.677 00:20:36.677 --- 10.0.0.4 ping statistics --- 00:20:36.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.677 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:36.677 00:20:36.677 --- 10.0.0.1 ping statistics --- 00:20:36.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.677 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:36.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:36.677 00:20:36.677 --- 10.0.0.2 ping statistics --- 00:20:36.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.677 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76353 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76353 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 76353 ']' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.677 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.677 [2024-11-20 05:30:51.138234] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
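The block above builds the virtual test network, verifies it with pings, and then launches nvmf_tgt inside the namespace. Reduced to a single initiator/target pair (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is wired up the same way), the topology amounts to roughly this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end will move into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                       # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                            # tagged so the later iptr cleanup can strip it
  ping -c 1 10.0.0.3                                            # root ns -> target ns reachability check
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target on core 1 (pid 76353 in this run)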
00:20:36.677 [2024-11-20 05:30:51.138346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.936 [2024-11-20 05:30:51.288852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.936 [2024-11-20 05:30:51.335934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.936 [2024-11-20 05:30:51.336000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.936 [2024-11-20 05:30:51.336017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.936 [2024-11-20 05:30:51.336030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.936 [2024-11-20 05:30:51.336041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.936 [2024-11-20 05:30:51.336439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.936 [2024-11-20 05:30:51.373049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:36.936 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.936 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:20:36.936 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.936 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.936 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 [2024-11-20 05:30:51.463079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 [2024-11-20 05:30:51.471281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 05:30:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 null0 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 null1 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76377 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76377 /tmp/host.sock 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 76377 ']' 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.195 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.195 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 [2024-11-20 05:30:51.554368] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
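With the target up, the script provisions it through the default RPC socket and then starts a second SPDK app that plays the NVMe-oF host, reachable on /tmp/host.sock. Written as bare rpc.py calls (the log issues the same commands through the rpc_cmd wrapper; the $rpc shorthand is only for brevity), the sequence is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the suite's default -o/-u tuning
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.3 -s 8009                      # discovery service on the target's veth address
  $rpc bdev_null_create null0 1000 512                # two 1000 MB null bdevs with 512-byte blocks
  $rpc bdev_null_create null1 1000 512
  $rpc bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &   # host-side app (pid 76377 in this run)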
00:20:37.195 [2024-11-20 05:30:51.554459] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76377 ] 00:20:37.453 [2024-11-20 05:30:51.708799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.453 [2024-11-20 05:30:51.749597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.453 [2024-11-20 05:30:51.778834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.453 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.453 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:20:37.453 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.453 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:37.454 05:30:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:37.454 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.711 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 [2024-11-20 05:30:52.195362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.712 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.971 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:20:37.972 05:30:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:20:38.538 [2024-11-20 05:30:52.857070] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:38.538 [2024-11-20 05:30:52.857135] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:38.538 
[2024-11-20 05:30:52.857212] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:38.538 [2024-11-20 05:30:52.863164] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:38.538 [2024-11-20 05:30:52.925856] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:38.538 [2024-11-20 05:30:52.927366] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2400e50:1 started. 00:20:38.538 [2024-11-20 05:30:52.929693] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:38.538 [2024-11-20 05:30:52.929744] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:38.538 [2024-11-20 05:30:52.935487] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2400e50 was disconnected and freed. delete nvme_qpair. 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:39.104 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 [2024-11-20 05:30:53.728102] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23d9640:1 started. 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:39.363 [2024-11-20 05:30:53.735962] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23d9640 was disconnected and freed. delete nvme_qpair. 
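The shell trace around these messages shows host/discovery.sh waiting for the freshly attached controller to become visible. The repeated local max=10, (( max-- )), eval, and sleep 1 lines come from the waitforcondition helper in common/autotest_common.sh, which polls an arbitrary shell condition until it holds or the retry budget runs out. A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from the helper's source, looks like this:

    # Poll a condition (passed as a string) up to 10 times, one second apart.
    # Return 0 as soon as the condition evaluates to true, 1 if it never does.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Example from this test: wait until discovery has created controller nvme0.
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'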
00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 [2024-11-20 05:30:53.861479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:39.363 [2024-11-20 05:30:53.861869] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:39.363 [2024-11-20 05:30:53.861933] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:39.363 [2024-11-20 05:30:53.867861] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:39.363 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.364 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:39.364 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.624 [2024-11-20 05:30:53.930724] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:39.624 [2024-11-20 05:30:53.930809] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:39.624 [2024-11-20 05:30:53.930823] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:39.624 [2024-11-20 05:30:53.930829] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:39.624 05:30:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.624 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.625 [2024-11-20 05:30:54.105186] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:39.625 [2024-11-20 05:30:54.105259] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:39.625 [2024-11-20 05:30:54.106347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.625 [2024-11-20 05:30:54.106412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.625 [2024-11-20 05:30:54.106435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.625 [2024-11-20 05:30:54.106453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.625 [2024-11-20 05:30:54.106471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.625 [2024-11-20 05:30:54.106487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.625 [2024-11-20 05:30:54.106502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.625 [2024-11-20 05:30:54.106518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.625 [2024-11-20 05:30:54.106535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd230 is same with the state(6) to be set 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:39.625 [2024-11-20 05:30:54.111177] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:39.625 [2024-11-20 05:30:54.111238] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:39.625 [2024-11-20 05:30:54.111351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dd230 (9): Bad file descriptor 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:39.625 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:39.884 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.885 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:40.144 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.145 05:30:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.080 [2024-11-20 05:30:55.567260] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:41.080 [2024-11-20 05:30:55.567306] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:41.080 [2024-11-20 05:30:55.567327] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:41.080 [2024-11-20 05:30:55.573307] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:41.338 [2024-11-20 05:30:55.631791] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:41.338 [2024-11-20 05:30:55.632701] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x240dea0:1 started. 00:20:41.338 [2024-11-20 05:30:55.634179] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:41.338 [2024-11-20 05:30:55.634226] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.338 [2024-11-20 05:30:55.636296] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x240dea0 was disconnected and freed. delete nvme_qpair. 
00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.338 request: 00:20:41.338 { 00:20:41.338 "name": "nvme", 00:20:41.338 "trtype": "tcp", 00:20:41.338 "traddr": "10.0.0.3", 00:20:41.338 "adrfam": "ipv4", 00:20:41.338 "trsvcid": "8009", 00:20:41.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:41.338 "wait_for_attach": true, 00:20:41.338 "method": "bdev_nvme_start_discovery", 00:20:41.338 "req_id": 1 00:20:41.338 } 00:20:41.338 Got JSON-RPC error response 00:20:41.338 response: 00:20:41.338 { 00:20:41.338 "code": -17, 00:20:41.338 "message": "File exists" 00:20:41.338 } 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:20:41.338 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.339 request: 00:20:41.339 { 00:20:41.339 "name": "nvme_second", 00:20:41.339 "trtype": "tcp", 00:20:41.339 "traddr": "10.0.0.3", 00:20:41.339 "adrfam": "ipv4", 00:20:41.339 "trsvcid": "8009", 00:20:41.339 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:41.339 "wait_for_attach": true, 00:20:41.339 "method": "bdev_nvme_start_discovery", 00:20:41.339 "req_id": 1 00:20:41.339 } 00:20:41.339 Got JSON-RPC error response 00:20:41.339 response: 00:20:41.339 { 00:20:41.339 "code": -17, 00:20:41.339 "message": "File exists" 00:20:41.339 } 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:41.339 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.598 05:30:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.560 [2024-11-20 05:30:56.914930] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.560 [2024-11-20 05:30:56.915009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d6a0 with addr=10.0.0.3, port=8010 00:20:42.560 [2024-11-20 05:30:56.915032] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:42.560 [2024-11-20 05:30:56.915043] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:42.560 [2024-11-20 05:30:56.915053] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:43.494 [2024-11-20 05:30:57.914974] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.494 [2024-11-20 05:30:57.915082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d6a0 with addr=10.0.0.3, port=8010 00:20:43.494 [2024-11-20 05:30:57.915118] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:43.494 [2024-11-20 05:30:57.915137] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:43.494 [2024-11-20 05:30:57.915154] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:44.427 [2024-11-20 05:30:58.914748] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:44.427 request: 00:20:44.427 { 00:20:44.427 "name": "nvme_second", 00:20:44.427 "trtype": "tcp", 00:20:44.427 "traddr": "10.0.0.3", 00:20:44.427 "adrfam": "ipv4", 00:20:44.427 "trsvcid": "8010", 00:20:44.427 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:44.427 "wait_for_attach": false, 00:20:44.427 "attach_timeout_ms": 3000, 00:20:44.427 "method": "bdev_nvme_start_discovery", 00:20:44.427 "req_id": 1 00:20:44.427 } 00:20:44.427 Got JSON-RPC error response 00:20:44.427 response: 00:20:44.427 { 00:20:44.427 "code": -110, 00:20:44.427 "message": "Connection timed out" 00:20:44.427 } 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:44.427 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76377 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.731 05:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery --
nvmf/common.sh@121 -- # sync 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.731 rmmod nvme_tcp 00:20:44.731 rmmod nvme_fabrics 00:20:44.731 rmmod nvme_keyring 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76353 ']' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76353 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 76353 ']' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 76353 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76353 00:20:44.731 killing process with pid 76353 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76353' 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 76353 00:20:44.731 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 76353 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:44.990 05:30:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.990 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:45.248 00:20:45.248 real 0m9.078s 00:20:45.248 user 0m17.097s 00:20:45.248 sys 0m2.021s 00:20:45.248 ************************************ 00:20:45.248 END TEST nvmf_host_discovery 00:20:45.248 ************************************ 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.248 ************************************ 00:20:45.248 START TEST nvmf_host_multipath_status 00:20:45.248 ************************************ 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:45.248 * Looking for test storage... 
00:20:45.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:45.248 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:45.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.249 --rc genhtml_branch_coverage=1 00:20:45.249 --rc genhtml_function_coverage=1 00:20:45.249 --rc genhtml_legend=1 00:20:45.249 --rc geninfo_all_blocks=1 00:20:45.249 --rc geninfo_unexecuted_blocks=1 00:20:45.249 00:20:45.249 ' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:45.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.249 --rc genhtml_branch_coverage=1 00:20:45.249 --rc genhtml_function_coverage=1 00:20:45.249 --rc genhtml_legend=1 00:20:45.249 --rc geninfo_all_blocks=1 00:20:45.249 --rc geninfo_unexecuted_blocks=1 00:20:45.249 00:20:45.249 ' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:45.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.249 --rc genhtml_branch_coverage=1 00:20:45.249 --rc genhtml_function_coverage=1 00:20:45.249 --rc genhtml_legend=1 00:20:45.249 --rc geninfo_all_blocks=1 00:20:45.249 --rc geninfo_unexecuted_blocks=1 00:20:45.249 00:20:45.249 ' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:45.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.249 --rc genhtml_branch_coverage=1 00:20:45.249 --rc genhtml_function_coverage=1 00:20:45.249 --rc genhtml_legend=1 00:20:45.249 --rc geninfo_all_blocks=1 00:20:45.249 --rc geninfo_unexecuted_blocks=1 00:20:45.249 00:20:45.249 ' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.249 05:30:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:45.249 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:45.249 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.250 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:45.507 Cannot find device "nvmf_init_br" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:45.508 Cannot find device "nvmf_init_br2" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:45.508 Cannot find device "nvmf_tgt_br" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.508 Cannot find device "nvmf_tgt_br2" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:45.508 Cannot find device "nvmf_init_br" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:45.508 Cannot find device "nvmf_init_br2" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:45.508 Cannot find device "nvmf_tgt_br" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:45.508 Cannot find device "nvmf_tgt_br2" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:45.508 Cannot find device "nvmf_br" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:45.508 Cannot find device "nvmf_init_if" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:45.508 Cannot find device "nvmf_init_if2" 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:45.508 05:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:45.508 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:45.508 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:45.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:45.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:20:45.765 00:20:45.765 --- 10.0.0.3 ping statistics --- 00:20:45.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.765 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:45.765 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:45.765 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:20:45.765 00:20:45.765 --- 10.0.0.4 ping statistics --- 00:20:45.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.765 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:45.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:45.765 00:20:45.765 --- 10.0.0.1 ping statistics --- 00:20:45.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.765 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:45.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:45.765 00:20:45.765 --- 10.0.0.2 ping statistics --- 00:20:45.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.765 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76872 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76872 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76872 ']' 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
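Condensed from the nvmf_veth_init trace above (nvmf/common.sh@177-@225) with the ipts/xtrace wrappers and error handling dropped, the network the test just built and the target launch reduce to roughly the following; device names, addresses, and flags are the ones visible in this log:

  # Rebuild of the topology traced above: two initiator veths on the host,
  # two target veths inside the nvmf_tgt_ns_spdk namespace, all on one bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Allow NVMe/TCP traffic in and let the bridge forward between its ports.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity pings in both directions, then start the target inside the namespace.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &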
00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.765 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:45.765 [2024-11-20 05:31:00.202996] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:20:45.765 [2024-11-20 05:31:00.203087] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.023 [2024-11-20 05:31:00.349327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.023 [2024-11-20 05:31:00.383444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.023 [2024-11-20 05:31:00.383679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.023 [2024-11-20 05:31:00.383886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.023 [2024-11-20 05:31:00.384057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.023 [2024-11-20 05:31:00.384150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.023 [2024-11-20 05:31:00.384933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.023 [2024-11-20 05:31:00.384939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.023 [2024-11-20 05:31:00.417222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76872 00:20:46.023 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:46.588 [2024-11-20 05:31:00.794581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.588 05:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:46.846 Malloc0 00:20:46.846 05:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:47.105 05:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.364 05:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:47.622 [2024-11-20 05:31:02.088626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:47.622 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:48.188 [2024-11-20 05:31:02.424718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:48.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76920 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76920 /var/tmp/bdevperf.sock 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76920 ']' 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
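The target-side configuration traced above and the initiator-side attach that follows just below boil down to the RPC sequence sketched here, condensed from the rpc.py calls in this log; rpc and the brpc helper are shorthands introduced only for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc() { $rpc -s /var/tmp/bdevperf.sock "$@"; }   # helper name introduced here, not in the test

  # Target side (traced above): TCP transport, a 64 MiB malloc namespace, and a
  # subsystem with two listeners so the host sees two paths to the same namespace.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Initiator side (traced just below): bdevperf was started with
  #   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
  # and both listeners are attached to the same bdev in multipath mode.
  brpc bdev_nvme_set_options -r -1
  brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10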
00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.188 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:48.445 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:48.445 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:20:48.445 05:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:48.704 05:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:49.270 Nvme0n1 00:20:49.270 05:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:49.528 Nvme0n1 00:20:49.528 05:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:49.528 05:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:52.057 05:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:52.058 05:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:52.058 05:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:52.315 05:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:53.259 05:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:53.259 05:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:53.259 05:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.259 05:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:53.825 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.825 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:53.825 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.825 05:31:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:54.392 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:54.392 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:54.392 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.392 05:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:54.658 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.658 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:54.658 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.658 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:55.227 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.227 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:55.227 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.227 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:55.488 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.488 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:55.488 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.488 05:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:56.055 05:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.055 05:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:56.055 05:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:56.314 05:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:56.572 05:31:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:57.948 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:57.948 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:57.948 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.948 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:58.207 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:58.207 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:58.207 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.207 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:58.465 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.465 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:58.465 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.465 05:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:58.722 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.722 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:58.722 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.722 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:59.289 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.289 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:59.289 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.289 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:59.546 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.546 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:59.546 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.547 05:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:59.805 05:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.805 05:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:59.805 05:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:00.371 05:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:01.007 05:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:01.941 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:01.941 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:01.941 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.941 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:02.199 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.199 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:02.199 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.199 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:02.457 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:02.457 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:02.457 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.457 05:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:02.715 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.715 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:21:02.715 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.715 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:02.973 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.973 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:02.973 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.973 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:03.540 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.540 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:03.540 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.540 05:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:03.799 05:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.799 05:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:03.799 05:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:04.057 05:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:04.316 05:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:05.694 05:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:05.694 05:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:05.694 05:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:05.694 05:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.694 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.694 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:05.694 05:31:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.694 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:06.026 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.026 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:06.026 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.026 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:06.284 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.284 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:06.284 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:06.284 05:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.543 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.543 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:06.543 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.543 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:06.801 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.801 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:06.801 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.801 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:07.366 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.366 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:07.366 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:07.623 05:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:07.881 05:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:08.815 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:08.815 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:08.815 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.815 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:09.074 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:09.074 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:09.074 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.074 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:09.641 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:09.641 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:09.641 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:09.641 05:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.899 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:09.899 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:09.899 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.899 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:10.157 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.157 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:10.157 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.157 05:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:21:10.723 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.723 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:10.723 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:10.723 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.981 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.981 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:10.981 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:11.546 05:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:11.803 05:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:12.733 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:12.733 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:12.733 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:12.733 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.300 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:13.300 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:13.300 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.300 05:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.866 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.866 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:13.866 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.866 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
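Every one of the path checks traced above reduces to the same pattern: query the bdevperf RPC socket with bdev_nvme_get_io_paths and filter the JSON by listener port with jq, then compare the selected field against the expected value. A minimal standalone sketch of that pattern follows; the rpc.py path, socket path and jq filter are taken from the trace, while the helper name port_status is assumed to mirror host/multipath_status.sh rather than being its verbatim source.

    # Sketch of the per-port status check seen in the trace (assumed helper name).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Ask bdevperf for its I/O paths and pick the attribute of the path on this port.
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    # Example: assert the path listening on 4420 is current, connected and accessible.
    port_status 4420 current true
    port_status 4420 connected true
    port_status 4420 accessible true
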
00:21:14.124 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.124 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:14.124 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.124 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.382 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.382 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:14.382 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.382 05:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.639 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.639 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:14.639 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.639 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:14.895 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.895 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:15.153 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:15.153 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:15.411 05:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:15.670 05:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
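The step traced just above flips the test into active/active mode: the initiator-side multipath policy is switched through the bdevperf RPC socket, both target listeners get a new ANA state, and the script sleeps one second before re-checking the paths. A condensed sketch of that sequence is below; the NQN, address, ports and RPC names are copied from the trace, and the helper name set_ANA_state is assumed from the script name shown in the log.

    # Sketch of the ANA-flip step from the trace (helper name assumed).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {
        # Set the ANA state of the 4420 and 4421 listeners on the target side.
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # Initiator: treat all optimized paths as active simultaneously.
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized
    sleep 1   # give the initiator time to observe the ANA change before re-checking paths
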
00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.045 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:17.612 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.612 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:17.612 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.612 05:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:17.870 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.870 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:17.870 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.870 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:18.469 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.469 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:18.469 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.469 05:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:18.728 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.728 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:18.987 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.987 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:19.246 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.246 
05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:19.246 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:19.504 05:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:19.763 05:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:20.698 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:20.698 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:20.698 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.698 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:21.265 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:21.265 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:21.265 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.265 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:21.524 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.524 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:21.524 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:21.524 05:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.782 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.782 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:21.782 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:21.782 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.040 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.040 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:22.040 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.040 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:22.298 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.298 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:22.298 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.298 05:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:22.557 05:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.557 05:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:22.557 05:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:23.123 05:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:23.381 05:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:24.317 05:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:24.317 05:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:24.317 05:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:24.317 05:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.655 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.655 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:24.655 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.655 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:25.220 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.220 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:21:25.220 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.220 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:25.478 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.478 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:25.478 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.478 05:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.736 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.736 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:25.736 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.736 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.994 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.994 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:25.994 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.995 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:26.253 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.253 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:26.253 05:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:26.820 05:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:27.078 05:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:28.012 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:28.012 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:28.270 05:31:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.270 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:28.529 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.529 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:28.529 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.529 05:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:29.094 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:29.094 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.094 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.094 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:29.352 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.352 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:29.352 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.352 05:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:29.610 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.610 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:29.610 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:29.610 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.868 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.868 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:29.868 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:29.868 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76920 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76920 ']' 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76920 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:30.184 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76920 00:21:30.448 killing process with pid 76920 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76920' 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76920 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76920 00:21:30.448 { 00:21:30.448 "results": [ 00:21:30.448 { 00:21:30.448 "job": "Nvme0n1", 00:21:30.448 "core_mask": "0x4", 00:21:30.448 "workload": "verify", 00:21:30.448 "status": "terminated", 00:21:30.448 "verify_range": { 00:21:30.448 "start": 0, 00:21:30.448 "length": 16384 00:21:30.448 }, 00:21:30.448 "queue_depth": 128, 00:21:30.448 "io_size": 4096, 00:21:30.448 "runtime": 40.5791, 00:21:30.448 "iops": 7760.15239371992, 00:21:30.448 "mibps": 30.313095287968437, 00:21:30.448 "io_failed": 0, 00:21:30.448 "io_timeout": 0, 00:21:30.448 "avg_latency_us": 16461.875081751783, 00:21:30.448 "min_latency_us": 180.59636363636363, 00:21:30.448 "max_latency_us": 5033164.8 00:21:30.448 } 00:21:30.448 ], 00:21:30.448 "core_count": 1 00:21:30.448 } 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76920 00:21:30.448 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:30.448 [2024-11-20 05:31:02.509839] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:21:30.448 [2024-11-20 05:31:02.510003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76920 ] 00:21:30.448 [2024-11-20 05:31:02.663644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.448 [2024-11-20 05:31:02.713243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.448 [2024-11-20 05:31:02.752230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:30.448 Running I/O for 90 seconds... 
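When the bdevperf process is killed above it prints a JSON summary with iops, io_size, runtime and mibps. Those numbers can be cross-checked quickly, since MiB/s should equal iops * io_size / 2^20; the sketch below assumes the summary has been saved to a hypothetical results.json (in the run above it only goes to the console).

    # Consistency check on the bdevperf summary (results.json is a hypothetical save of the JSON above).
    jq -r '.results[0] | "\(.iops) \(.io_size) \(.mibps)"' results.json |
    awk '{ printf "reported %.2f MiB/s, computed %.2f MiB/s\n", $3, $1 * $2 / 1048576 }'
    # With iops ~= 7760.15 and io_size = 4096 this gives ~30.31 MiB/s, matching the reported mibps.
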
00:21:30.448 6293.00 IOPS, 24.58 MiB/s [2024-11-20T05:31:44.961Z] 6248.00 IOPS, 24.41 MiB/s [2024-11-20T05:31:44.961Z] 6634.67 IOPS, 25.92 MiB/s [2024-11-20T05:31:44.961Z] 6708.00 IOPS, 26.20 MiB/s [2024-11-20T05:31:44.961Z] 6747.20 IOPS, 26.36 MiB/s [2024-11-20T05:31:44.961Z] 6993.33 IOPS, 27.32 MiB/s [2024-11-20T05:31:44.961Z] 7136.71 IOPS, 27.88 MiB/s [2024-11-20T05:31:44.961Z] 7325.50 IOPS, 28.62 MiB/s [2024-11-20T05:31:44.961Z] 7433.33 IOPS, 29.04 MiB/s [2024-11-20T05:31:44.961Z] 7578.80 IOPS, 29.60 MiB/s [2024-11-20T05:31:44.961Z] 7675.18 IOPS, 29.98 MiB/s [2024-11-20T05:31:44.961Z] 7781.00 IOPS, 30.39 MiB/s [2024-11-20T05:31:44.961Z] 7863.54 IOPS, 30.72 MiB/s [2024-11-20T05:31:44.961Z] 7955.00 IOPS, 31.07 MiB/s [2024-11-20T05:31:44.961Z] 8022.47 IOPS, 31.34 MiB/s [2024-11-20T05:31:44.961Z] 8095.06 IOPS, 31.62 MiB/s [2024-11-20T05:31:44.961Z] 8158.18 IOPS, 31.87 MiB/s [2024-11-20T05:31:44.961Z] [2024-11-20 05:31:21.887550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.448 [2024-11-20 05:31:21.887620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:30.448 [2024-11-20 05:31:21.887682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.448 [2024-11-20 05:31:21.887706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:30.448 [2024-11-20 05:31:21.887729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.448 [2024-11-20 05:31:21.887746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.887768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.887784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.887825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.887841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.887863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.887879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.887914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.887933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.887956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.887974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.888351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 
p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.888983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.888999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.449 [2024-11-20 05:31:21.889325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.449 [2024-11-20 05:31:21.889363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:30.449 [2024-11-20 05:31:21.889385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 
05:31:21.889600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.889959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.889994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116424 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.450 [2024-11-20 05:31:21.890609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 
m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:30.450 [2024-11-20 05:31:21.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.450 [2024-11-20 05:31:21.890932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.890957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.890974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.890996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.891575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 
05:31:21.891613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.891973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.891995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116176 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.892245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.892260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.451 [2024-11-20 05:31:21.893089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.451 [2024-11-20 05:31:21.893372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:30.451 [2024-11-20 05:31:21.893405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:21:30.452 [2024-11-20 05:31:21.893768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:21.893862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:21.893879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:30.452 8085.39 IOPS, 31.58 MiB/s [2024-11-20T05:31:44.965Z] 7659.84 IOPS, 29.92 MiB/s [2024-11-20T05:31:44.965Z] 7276.85 IOPS, 28.43 MiB/s [2024-11-20T05:31:44.965Z] 6930.33 IOPS, 27.07 MiB/s [2024-11-20T05:31:44.965Z] 6615.32 IOPS, 25.84 MiB/s [2024-11-20T05:31:44.965Z] 6419.13 IOPS, 25.07 MiB/s [2024-11-20T05:31:44.965Z] 6484.00 IOPS, 25.33 MiB/s [2024-11-20T05:31:44.965Z] 6572.48 IOPS, 25.67 MiB/s [2024-11-20T05:31:44.965Z] 6700.62 IOPS, 26.17 MiB/s [2024-11-20T05:31:44.965Z] 6914.52 IOPS, 27.01 MiB/s [2024-11-20T05:31:44.965Z] 7029.71 IOPS, 27.46 MiB/s [2024-11-20T05:31:44.965Z] 7110.62 IOPS, 27.78 MiB/s [2024-11-20T05:31:44.965Z] 7190.90 IOPS, 28.09 MiB/s [2024-11-20T05:31:44.965Z] 7250.29 IOPS, 28.32 MiB/s [2024-11-20T05:31:44.965Z] 7306.22 IOPS, 28.54 MiB/s [2024-11-20T05:31:44.965Z] 7352.94 IOPS, 28.72 MiB/s [2024-11-20T05:31:44.965Z] 7445.53 IOPS, 29.08 MiB/s [2024-11-20T05:31:44.965Z] 7557.63 IOPS, 29.52 MiB/s [2024-11-20T05:31:44.965Z] 7681.25 IOPS, 30.00 MiB/s [2024-11-20T05:31:44.965Z] 7736.95 IOPS, 30.22 MiB/s [2024-11-20T05:31:44.965Z] [2024-11-20 05:31:41.503431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.503567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.503667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.503708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.503741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.503781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.503827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.503869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.503899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.503965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.503996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.504257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.504317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.504467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.504820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.504961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.504994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.505066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.452 [2024-11-20 05:31:41.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.505205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:30.452 [2024-11-20 05:31:41.505273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.452 [2024-11-20 05:31:41.505342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:30.452 [2024-11-20 05:31:41.505400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.505432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.505502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.505570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.505638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.505707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.505774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.505932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.505976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.506009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.506079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.506148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.506218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.506343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.506416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.506486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.506555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.506593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.506625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.508498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.508566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.508635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.508703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.508926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.508962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.509034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:21:30.453 [2024-11-20 05:31:41.509072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.509101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.509172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.453 [2024-11-20 05:31:41.509241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.509346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.453 [2024-11-20 05:31:41.509418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:30.453 [2024-11-20 05:31:41.509458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.509488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.509559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.509629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.509717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.509793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.509863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.509957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.509998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.510811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.510881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.513115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.454 [2024-11-20 05:31:41.513425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.513561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.513629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.513887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.513951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.513984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.514056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.514194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.514262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.514332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.454 [2024-11-20 05:31:41.514401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:30.454 [2024-11-20 05:31:41.514440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.454 [2024-11-20 05:31:41.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.514542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.514632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.514703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.514774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.514844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.514937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.514980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.515158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.515229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.515299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.515370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.515439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:21:30.455 [2024-11-20 05:31:41.515640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.515778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.515825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.517783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.517848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.517897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.517952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.517989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.518849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.518970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.455 [2024-11-20 05:31:41.518998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.519036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.519062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.519098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.455 [2024-11-20 05:31:41.519126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:30.455 [2024-11-20 05:31:41.519186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.519284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.519354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.519456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.456 [2024-11-20 05:31:41.519669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.519865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.519896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.522414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.522504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.522597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.522671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.522742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.522811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.522881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.522943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.522977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.523390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.523481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.523633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.523863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.523955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.524027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.524067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.524098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.525399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.456 [2024-11-20 05:31:41.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.525541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.525581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:30.456 [2024-11-20 05:31:41.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.456 [2024-11-20 05:31:41.525655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
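All of these completions report the same status pair: the (03/02) printed by spdk_nvme_print_completion is (Status Code Type/Status Code), and SCT 0x3 with SC 0x02 is the path-related ANA status the log spells out as ASYMMETRIC ACCESS INACCESSIBLE, meaning the controller is reachable but its ANA group is not serving I/O, so the host retries on the other path (dnr:0 shows the target is not asking it to give up). A minimal sketch of pulling those fields out of a raw 16-bit completion status word, assuming the standard layout (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11, DNR in bit 15):

# Decode an NVMe completion status word into the (SCT/SC) pair shown above.
decode_nvme_status() {
  local status=$1
  local sc=$(( (status >> 1) & 0xff ))   # Status Code
  local sct=$(( (status >> 9) & 0x7 ))   # Status Code Type
  local dnr=$(( (status >> 15) & 0x1 ))  # Do Not Retry
  printf 'sct=0x%x sc=0x%02x dnr=%d\n' "$sct" "$sc" "$dnr"
}

# SCT 0x3 / SC 0x02 is what the completions above report.
decode_nvme_status $(( (0x3 << 9) | (0x02 << 1) ))   # -> sct=0x3 sc=0x02 dnr=0
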
00:21:30.456 [2024-11-20 05:31:41.525693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:30.456 [2024-11-20 05:31:41.525726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:21:30.456 [2024-11-20 05:31:41.525794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:30.456 [2024-11-20 05:31:41.525825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:21:30.456 [2024-11-20 05:31:41.525865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:30.456 [2024-11-20 05:31:41.525895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:21:30.456 [2024-11-20 05:31:41.525971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:30.457 [2024-11-20 05:31:41.526004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:21:30.457 [2024-11-20 05:31:41.526045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.457 [2024-11-20 05:31:41.526076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:21:30.457 [2024-11-20 05:31:41.526115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.457 [2024-11-20 05:31:41.526145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:21:30.457 7737.82 IOPS, 30.23 MiB/s
[2024-11-20T05:31:44.970Z] 7739.00 IOPS, 30.23 MiB/s
[2024-11-20T05:31:44.970Z] 7759.90 IOPS, 30.31 MiB/s
[2024-11-20T05:31:44.970Z] Received shutdown signal, test time was about 40.580140 seconds
00:21:30.457
00:21:30.457 Latency(us)
00:21:30.457 [2024-11-20T05:31:44.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.457 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:30.457 Verification LBA range: start 0x0 length 0x4000
00:21:30.457 Nvme0n1 : 40.58 7760.15 30.31 0.00 0.00 16461.88 180.60 5033164.80
00:21:30.457 [2024-11-20T05:31:44.970Z] ===================================================================================================================
00:21:30.457 [2024-11-20T05:31:44.970Z] Total : 7760.15 30.31 0.00 0.00 16461.88 180.60 5033164.80
00:21:30.457 05:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:30.716 05:31:45
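The closing summary is internally consistent: the job ran with 4096-byte I/Os (see the Job line), so the MiB/s column is just the IOPS column scaled by the block size, and the 40.58 runtime matches the ~40.58-second shutdown notice above. A quick check of the final row, assuming 1 MiB = 1048576 bytes:

# 7760.15 IOPS at 4096 bytes per I/O, expressed in MiB/s (matches the 30.31 in the table).
echo 'scale=2; 7760.15 * 4096 / 1048576' | bc
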
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.716 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.716 rmmod nvme_tcp 00:21:30.716 rmmod nvme_fabrics 00:21:30.716 rmmod nvme_keyring 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76872 ']' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76872 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76872 ']' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76872 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76872 00:21:30.975 killing process with pid 76872 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76872' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76872 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76872 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.975 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:30.976 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:21:31.234 ************************************ 00:21:31.234 END TEST nvmf_host_multipath_status 00:21:31.234 ************************************ 00:21:31.234 00:21:31.234 real 0m46.084s 00:21:31.234 user 2m32.034s 00:21:31.234 sys 0m13.859s 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:31.234 05:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:31.235 05:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:31.235 05:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.235 ************************************ 00:21:31.235 START TEST nvmf_discovery_remove_ifc 
00:21:31.235 ************************************ 00:21:31.235 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:31.494 * Looking for test storage... 00:21:31.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.494 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.495 --rc genhtml_branch_coverage=1 00:21:31.495 --rc genhtml_function_coverage=1 00:21:31.495 --rc genhtml_legend=1 00:21:31.495 --rc geninfo_all_blocks=1 00:21:31.495 --rc geninfo_unexecuted_blocks=1 00:21:31.495 00:21:31.495 ' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.495 --rc genhtml_branch_coverage=1 00:21:31.495 --rc genhtml_function_coverage=1 00:21:31.495 --rc genhtml_legend=1 00:21:31.495 --rc geninfo_all_blocks=1 00:21:31.495 --rc geninfo_unexecuted_blocks=1 00:21:31.495 00:21:31.495 ' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.495 --rc genhtml_branch_coverage=1 00:21:31.495 --rc genhtml_function_coverage=1 00:21:31.495 --rc genhtml_legend=1 00:21:31.495 --rc geninfo_all_blocks=1 00:21:31.495 --rc geninfo_unexecuted_blocks=1 00:21:31.495 00:21:31.495 ' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.495 --rc genhtml_branch_coverage=1 00:21:31.495 --rc genhtml_function_coverage=1 00:21:31.495 --rc genhtml_legend=1 00:21:31.495 --rc geninfo_all_blocks=1 00:21:31.495 --rc geninfo_unexecuted_blocks=1 00:21:31.495 00:21:31.495 ' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.495 05:31:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:31.495 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.496 05:31:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:31.496 Cannot find device "nvmf_init_br" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:31.496 Cannot find device "nvmf_init_br2" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:31.496 Cannot find device "nvmf_tgt_br" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.496 Cannot find device "nvmf_tgt_br2" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:31.496 Cannot find device "nvmf_init_br" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:31.496 Cannot find device "nvmf_init_br2" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:31.496 Cannot find device "nvmf_tgt_br" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:31.496 Cannot find device "nvmf_tgt_br2" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:31.496 Cannot find device "nvmf_br" 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:21:31.496 05:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:31.755 Cannot find device "nvmf_init_if" 00:21:31.755 05:31:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:31.755 Cannot find device "nvmf_init_if2" 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.755 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.756 05:31:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:31.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:21:31.756 00:21:31.756 --- 10.0.0.3 ping statistics --- 00:21:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.756 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:31.756 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:31.756 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:21:31.756 00:21:31.756 --- 10.0.0.4 ping statistics --- 00:21:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.756 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:31.756 00:21:31.756 --- 10.0.0.1 ping statistics --- 00:21:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.756 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:31.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:31.756 00:21:31.756 --- 10.0.0.2 ping statistics --- 00:21:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.756 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.756 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77813 00:21:32.014 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77813 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77813 ']' 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:32.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
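The block above is nvmf_veth_init building the test fabric and nvmfappstart launching the target inside it: a network namespace for the target, veth pairs bridged through nvmf_br, /24 addresses on both ends, SPDK-tagged iptables ACCEPT rules for port 4420, and ping checks in both directions before the target app is started. A condensed, non-authoritative sketch of those commands, with interface names, addresses and binary paths taken from the trace (the second veth pair and the ipts/nvmfappstart wrapper functions are elided):

  # condensed from the nvmf_veth_init / nvmfappstart trace above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # allow NVMe/TCP in; the comment lets iptr strip exactly these rules at teardown
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # initiator side reaches the target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the target reaches back
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &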
00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:32.015 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.015 [2024-11-20 05:31:46.343513] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:21:32.015 [2024-11-20 05:31:46.343593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.015 [2024-11-20 05:31:46.495302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.274 [2024-11-20 05:31:46.528427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.274 [2024-11-20 05:31:46.528487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.274 [2024-11-20 05:31:46.528500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.274 [2024-11-20 05:31:46.528508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.274 [2024-11-20 05:31:46.528515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.274 [2024-11-20 05:31:46.528831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.274 [2024-11-20 05:31:46.559130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:32.274 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 [2024-11-20 05:31:46.663346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.275 [2024-11-20 05:31:46.671481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:32.275 null0 00:21:32.275 [2024-11-20 05:31:46.703425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77839 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77839 /tmp/host.sock 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77839 ']' 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:32.275 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:32.275 05:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 [2024-11-20 05:31:46.776113] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:21:32.275 [2024-11-20 05:31:46.776198] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77839 ] 00:21:32.534 [2024-11-20 05:31:46.918582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.534 [2024-11-20 05:31:46.963174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.534 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 [2024-11-20 05:31:47.048689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.793 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.793 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:32.793 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.793 05:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.728 [2024-11-20 05:31:48.086438] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:33.728 [2024-11-20 05:31:48.086479] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:33.728 [2024-11-20 05:31:48.086503] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:33.728 [2024-11-20 05:31:48.092504] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:33.728 [2024-11-20 05:31:48.146930] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:33.728 [2024-11-20 05:31:48.147934] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xdbbfb0:1 started. 00:21:33.728 [2024-11-20 05:31:48.149606] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:33.728 [2024-11-20 05:31:48.149669] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:33.728 [2024-11-20 05:31:48.149698] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:33.728 [2024-11-20 05:31:48.149715] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:33.728 [2024-11-20 05:31:48.149742] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.728 [2024-11-20 05:31:48.155055] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xdbbfb0 was disconnected and freed. delete nvme_qpair. 
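On the host side the test starts a second SPDK app against /tmp/host.sock and drives it entirely over RPC: set the bdev_nvme options used by the script, finish framework init, then attach to the discovery service on 10.0.0.3:8009 with deliberately short reconnect and ctrlr-loss timeouts so that yanking the interface is noticed within seconds. Roughly, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper and every flag copied from the trace:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock \
      --wait-for-rpc -L bdev_nvme &
  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc.py -s /tmp/host.sock framework_start_init
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The --wait-for-attach flag makes the discovery RPC return only after the discovered subsystem (nvme0, exposing nvme0n1) has attached, which is why the next step can immediately poll the bdev list.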
00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.728 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.729 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.987 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.987 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:33.987 05:31:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:34.923 05:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:35.858 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:35.858 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.858 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.858 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:35.859 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:35.859 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:35.859 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:35.859 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.117 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:36.117 05:31:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:37.053 05:31:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:37.988 05:31:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:39.364 05:31:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:39.364 [2024-11-20 05:31:53.577477] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:39.364 [2024-11-20 05:31:53.578041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.364 [2024-11-20 05:31:53.578174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.364 [2024-11-20 05:31:53.578264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.364 [2024-11-20 05:31:53.578348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.364 [2024-11-20 05:31:53.578414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.364 [2024-11-20 05:31:53.578488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.364 [2024-11-20 05:31:53.578569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.364 [2024-11-20 05:31:53.578651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.364 [2024-11-20 05:31:53.578719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.364 [2024-11-20 05:31:53.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.364 [2024-11-20 05:31:53.578873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98240 is same with the state(6) to be set 00:21:39.364 [2024-11-20 05:31:53.587467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd98240 (9): Bad file descriptor 00:21:39.364 [2024-11-20 05:31:53.597490] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:39.364 [2024-11-20 05:31:53.597522] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:39.364 [2024-11-20 05:31:53.597533] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:39.364 [2024-11-20 05:31:53.597540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:39.364 [2024-11-20 05:31:53.597669] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:40.299 [2024-11-20 05:31:54.627973] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:40.299 [2024-11-20 05:31:54.628093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd98240 with addr=10.0.0.3, port=4420 00:21:40.299 [2024-11-20 05:31:54.628124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98240 is same with the state(6) to be set 00:21:40.299 [2024-11-20 05:31:54.628186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd98240 (9): Bad file descriptor 00:21:40.299 [2024-11-20 05:31:54.628990] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:40.299 [2024-11-20 05:31:54.629063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:40.299 [2024-11-20 05:31:54.629085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:40.299 [2024-11-20 05:31:54.629106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:40.299 [2024-11-20 05:31:54.629124] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:40.299 [2024-11-20 05:31:54.629136] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:40.299 [2024-11-20 05:31:54.629146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:40.299 [2024-11-20 05:31:54.629164] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
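Each one-second iteration in the trace above is the script's wait_for_bdev/get_bdev_list pair: list the bdev names over the host RPC socket and loop until the list matches the expected value (nvme0n1 while the controller is healthy, the empty string once the removed interface has forced it out). An approximate reconstruction of those helpers from the jq/sort/xargs pipeline visible in the trace, not the verbatim discovery_remove_ifc.sh:

  get_bdev_list() {
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {   # $1 = expected list, e.g. "nvme0n1" or ""
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

While the loop runs, the log also shows the bdev_nvme layer reacting to the dead path: the read times out with errno 110, the qpair is deleted, and reconnect attempts begin under the 1-second reconnect delay configured earlier.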
00:21:40.299 [2024-11-20 05:31:54.629175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:40.299 05:31:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:41.267 [2024-11-20 05:31:55.629227] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:41.267 [2024-11-20 05:31:55.629287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:41.267 [2024-11-20 05:31:55.629322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:41.267 [2024-11-20 05:31:55.629335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:41.267 [2024-11-20 05:31:55.629345] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:21:41.267 [2024-11-20 05:31:55.629355] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:41.267 [2024-11-20 05:31:55.629362] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:41.267 [2024-11-20 05:31:55.629367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:41.267 [2024-11-20 05:31:55.629402] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:41.267 [2024-11-20 05:31:55.629458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.267 [2024-11-20 05:31:55.629473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.267 [2024-11-20 05:31:55.629487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.267 [2024-11-20 05:31:55.629496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.267 [2024-11-20 05:31:55.629506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.267 [2024-11-20 05:31:55.629515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.267 [2024-11-20 05:31:55.629525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.267 [2024-11-20 05:31:55.629534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.267 [2024-11-20 05:31:55.629544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.267 [2024-11-20 05:31:55.629553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.267 [2024-11-20 05:31:55.629562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:21:41.267 [2024-11-20 05:31:55.629604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd23a20 (9): Bad file descriptor 00:21:41.267 [2024-11-20 05:31:55.630595] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:41.267 [2024-11-20 05:31:55.630621] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:41.267 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.526 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:41.526 05:31:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:42.462 05:31:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:42.462 05:31:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:43.398 [2024-11-20 05:31:57.641933] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:43.398 [2024-11-20 05:31:57.641980] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:43.398 [2024-11-20 05:31:57.642011] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:43.398 [2024-11-20 05:31:57.648004] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:43.398 [2024-11-20 05:31:57.702479] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:21:43.398 [2024-11-20 05:31:57.703303] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd749f0:1 started. 00:21:43.398 [2024-11-20 05:31:57.704543] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:43.398 [2024-11-20 05:31:57.704591] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:43.398 [2024-11-20 05:31:57.704616] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:43.398 [2024-11-20 05:31:57.704632] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:43.398 [2024-11-20 05:31:57.704643] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:43.398 [2024-11-20 05:31:57.710081] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd749f0 was disconnected and freed. delete nvme_qpair. 
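Restoring the path is the mirror image of the removal: the address is added back inside the namespace, the interface is brought up, and the still-running discovery poller reconnects on its own, creating a fresh controller (nvme1) whose namespace the test then waits for. The recovery step in isolation, using the helper sketched above:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1    # discovery re-attaches without any extra RPC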
00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:43.398 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77839 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77839 ']' 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77839 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:21:43.656 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77839 00:21:43.657 killing process with pid 77839 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77839' 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77839 00:21:43.657 05:31:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77839 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.657 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.657 rmmod nvme_tcp 00:21:43.915 rmmod nvme_fabrics 00:21:43.915 rmmod nvme_keyring 00:21:43.915 05:31:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77813 ']' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77813 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77813 ']' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77813 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77813 00:21:43.915 killing process with pid 77813 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77813' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77813 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77813 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:43.915 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:44.174 00:21:44.174 real 0m12.945s 00:21:44.174 user 0m21.999s 00:21:44.174 sys 0m2.418s 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:44.174 ************************************ 00:21:44.174 END TEST nvmf_discovery_remove_ifc 00:21:44.174 ************************************ 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.174 ************************************ 00:21:44.174 START TEST nvmf_identify_kernel_target 00:21:44.174 ************************************ 00:21:44.174 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:44.435 * Looking for test storage... 
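The teardown traced just before the END TEST banner is nvmftestfini: kill both app processes, unload the nvme-tcp modules, strip only the firewall rules this test added, and delete the veth/bridge/namespace fabric. The iptables step is the notable part, since it restores the ruleset minus anything carrying the SPDK_NVMF comment; a rough sketch of that path (iptr and remove_spdk_ns are the suite's wrappers, approximated here):

  modprobe -r nvme-tcp nvme-fabrics                    # the rmmod lines above
  iptables-save | grep -v SPDK_NVMF | iptables-restore # drop only SPDK-tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                     # remove_spdk_ns, approximately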
00:21:44.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.435 --rc genhtml_branch_coverage=1 00:21:44.435 --rc genhtml_function_coverage=1 00:21:44.435 --rc genhtml_legend=1 00:21:44.435 --rc geninfo_all_blocks=1 00:21:44.435 --rc geninfo_unexecuted_blocks=1 00:21:44.435 00:21:44.435 ' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.435 --rc genhtml_branch_coverage=1 00:21:44.435 --rc genhtml_function_coverage=1 00:21:44.435 --rc genhtml_legend=1 00:21:44.435 --rc geninfo_all_blocks=1 00:21:44.435 --rc geninfo_unexecuted_blocks=1 00:21:44.435 00:21:44.435 ' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.435 --rc genhtml_branch_coverage=1 00:21:44.435 --rc genhtml_function_coverage=1 00:21:44.435 --rc genhtml_legend=1 00:21:44.435 --rc geninfo_all_blocks=1 00:21:44.435 --rc geninfo_unexecuted_blocks=1 00:21:44.435 00:21:44.435 ' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.435 --rc genhtml_branch_coverage=1 00:21:44.435 --rc genhtml_function_coverage=1 00:21:44.435 --rc genhtml_legend=1 00:21:44.435 --rc geninfo_all_blocks=1 00:21:44.435 --rc geninfo_unexecuted_blocks=1 00:21:44.435 00:21:44.435 ' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
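Before the next test body runs, autotest_common.sh probes the installed lcov and picks coverage options accordingly; in this run lcov 1.15 is older than 2, so the legacy --rc flag names are exported. The gist of that gate, reconstructed loosely from the cmp_versions trace (the full LCOV_OPTS also carries genhtml/geninfo flags, elided here):

  lcov_version=$(lcov --version | awk '{print $NF}')   # resolves to 1.15 in this run
  if lt "$lcov_version" 2; then                        # lt/cmp_versions from scripts/common.sh
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
      export LCOV_OPTS="$lcov_rc_opt"
      export LCOV="lcov $LCOV_OPTS"
  fi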
00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.435 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:44.436 05:31:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.436 05:31:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:44.436 Cannot find device "nvmf_init_br" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:44.436 Cannot find device "nvmf_init_br2" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:44.436 Cannot find device "nvmf_tgt_br" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.436 Cannot find device "nvmf_tgt_br2" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:44.436 Cannot find device "nvmf_init_br" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:44.436 Cannot find device "nvmf_init_br2" 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:44.436 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:44.695 Cannot find device "nvmf_tgt_br" 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:44.695 Cannot find device "nvmf_tgt_br2" 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:44.695 Cannot find device "nvmf_br" 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:44.695 Cannot find device "nvmf_init_if" 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:44.695 Cannot find device "nvmf_init_if2" 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.695 05:31:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.695 05:31:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:44.695 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:44.954 05:31:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:44.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:21:44.954 00:21:44.954 --- 10.0.0.3 ping statistics --- 00:21:44.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.954 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:44.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:44.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:21:44.954 00:21:44.954 --- 10.0.0.4 ping statistics --- 00:21:44.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.954 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:44.954 00:21:44.954 --- 10.0.0.1 ping statistics --- 00:21:44.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.954 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:44.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:44.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:21:44.954 00:21:44.954 --- 10.0.0.2 ping statistics --- 00:21:44.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.954 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:44.954 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:45.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:45.213 Waiting for block devices as requested 00:21:45.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:45.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:45.471 No valid GPT data, bailing 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:45.471 05:31:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:45.471 05:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:45.729 No valid GPT data, bailing 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:45.729 No valid GPT data, bailing 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:45.729 No valid GPT data, bailing 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:45.729 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:45.989 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -a 10.0.0.1 -t tcp -s 4420 00:21:45.989 00:21:45.989 Discovery Log Number of Records 2, Generation counter 2 00:21:45.989 =====Discovery Log Entry 0====== 00:21:45.989 trtype: tcp 00:21:45.989 adrfam: ipv4 00:21:45.989 subtype: current discovery subsystem 00:21:45.989 treq: not specified, sq flow control disable supported 00:21:45.989 portid: 1 00:21:45.989 trsvcid: 4420 00:21:45.989 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:45.989 traddr: 10.0.0.1 00:21:45.989 eflags: none 00:21:45.989 sectype: none 00:21:45.989 =====Discovery Log Entry 1====== 00:21:45.989 trtype: tcp 00:21:45.989 adrfam: ipv4 00:21:45.989 subtype: nvme subsystem 00:21:45.989 treq: not 
specified, sq flow control disable supported 00:21:45.989 portid: 1 00:21:45.989 trsvcid: 4420 00:21:45.989 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:45.989 traddr: 10.0.0.1 00:21:45.989 eflags: none 00:21:45.989 sectype: none 00:21:45.989 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:45.989 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:45.989 ===================================================== 00:21:45.989 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:45.989 ===================================================== 00:21:45.989 Controller Capabilities/Features 00:21:45.989 ================================ 00:21:45.989 Vendor ID: 0000 00:21:45.989 Subsystem Vendor ID: 0000 00:21:45.989 Serial Number: 5a5efde558dae546eba3 00:21:45.989 Model Number: Linux 00:21:45.989 Firmware Version: 6.8.9-20 00:21:45.989 Recommended Arb Burst: 0 00:21:45.989 IEEE OUI Identifier: 00 00 00 00:21:45.989 Multi-path I/O 00:21:45.989 May have multiple subsystem ports: No 00:21:45.989 May have multiple controllers: No 00:21:45.989 Associated with SR-IOV VF: No 00:21:45.989 Max Data Transfer Size: Unlimited 00:21:45.989 Max Number of Namespaces: 0 00:21:45.989 Max Number of I/O Queues: 1024 00:21:45.989 NVMe Specification Version (VS): 1.3 00:21:45.989 NVMe Specification Version (Identify): 1.3 00:21:45.989 Maximum Queue Entries: 1024 00:21:45.989 Contiguous Queues Required: No 00:21:45.989 Arbitration Mechanisms Supported 00:21:45.989 Weighted Round Robin: Not Supported 00:21:45.989 Vendor Specific: Not Supported 00:21:45.989 Reset Timeout: 7500 ms 00:21:45.989 Doorbell Stride: 4 bytes 00:21:45.989 NVM Subsystem Reset: Not Supported 00:21:45.989 Command Sets Supported 00:21:45.989 NVM Command Set: Supported 00:21:45.989 Boot Partition: Not Supported 00:21:45.989 Memory Page Size Minimum: 4096 bytes 00:21:45.989 Memory Page Size Maximum: 4096 bytes 00:21:45.989 Persistent Memory Region: Not Supported 00:21:45.989 Optional Asynchronous Events Supported 00:21:45.989 Namespace Attribute Notices: Not Supported 00:21:45.989 Firmware Activation Notices: Not Supported 00:21:45.989 ANA Change Notices: Not Supported 00:21:45.989 PLE Aggregate Log Change Notices: Not Supported 00:21:45.989 LBA Status Info Alert Notices: Not Supported 00:21:45.989 EGE Aggregate Log Change Notices: Not Supported 00:21:45.989 Normal NVM Subsystem Shutdown event: Not Supported 00:21:45.989 Zone Descriptor Change Notices: Not Supported 00:21:45.989 Discovery Log Change Notices: Supported 00:21:45.989 Controller Attributes 00:21:45.989 128-bit Host Identifier: Not Supported 00:21:45.989 Non-Operational Permissive Mode: Not Supported 00:21:45.989 NVM Sets: Not Supported 00:21:45.989 Read Recovery Levels: Not Supported 00:21:45.989 Endurance Groups: Not Supported 00:21:45.989 Predictable Latency Mode: Not Supported 00:21:45.989 Traffic Based Keep ALive: Not Supported 00:21:45.989 Namespace Granularity: Not Supported 00:21:45.989 SQ Associations: Not Supported 00:21:45.989 UUID List: Not Supported 00:21:45.989 Multi-Domain Subsystem: Not Supported 00:21:45.989 Fixed Capacity Management: Not Supported 00:21:45.989 Variable Capacity Management: Not Supported 00:21:45.989 Delete Endurance Group: Not Supported 00:21:45.989 Delete NVM Set: Not Supported 00:21:45.989 Extended LBA Formats Supported: Not Supported 00:21:45.989 Flexible Data 
Placement Supported: Not Supported 00:21:45.989 00:21:45.989 Controller Memory Buffer Support 00:21:45.989 ================================ 00:21:45.989 Supported: No 00:21:45.989 00:21:45.989 Persistent Memory Region Support 00:21:45.989 ================================ 00:21:45.989 Supported: No 00:21:45.989 00:21:45.989 Admin Command Set Attributes 00:21:45.989 ============================ 00:21:45.989 Security Send/Receive: Not Supported 00:21:45.989 Format NVM: Not Supported 00:21:45.989 Firmware Activate/Download: Not Supported 00:21:45.989 Namespace Management: Not Supported 00:21:45.989 Device Self-Test: Not Supported 00:21:45.989 Directives: Not Supported 00:21:45.989 NVMe-MI: Not Supported 00:21:45.989 Virtualization Management: Not Supported 00:21:45.989 Doorbell Buffer Config: Not Supported 00:21:45.989 Get LBA Status Capability: Not Supported 00:21:45.989 Command & Feature Lockdown Capability: Not Supported 00:21:45.989 Abort Command Limit: 1 00:21:45.989 Async Event Request Limit: 1 00:21:45.989 Number of Firmware Slots: N/A 00:21:45.989 Firmware Slot 1 Read-Only: N/A 00:21:45.989 Firmware Activation Without Reset: N/A 00:21:45.989 Multiple Update Detection Support: N/A 00:21:45.989 Firmware Update Granularity: No Information Provided 00:21:45.989 Per-Namespace SMART Log: No 00:21:45.989 Asymmetric Namespace Access Log Page: Not Supported 00:21:45.989 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:45.989 Command Effects Log Page: Not Supported 00:21:45.989 Get Log Page Extended Data: Supported 00:21:45.989 Telemetry Log Pages: Not Supported 00:21:45.989 Persistent Event Log Pages: Not Supported 00:21:45.989 Supported Log Pages Log Page: May Support 00:21:45.989 Commands Supported & Effects Log Page: Not Supported 00:21:45.989 Feature Identifiers & Effects Log Page:May Support 00:21:45.989 NVMe-MI Commands & Effects Log Page: May Support 00:21:45.989 Data Area 4 for Telemetry Log: Not Supported 00:21:45.989 Error Log Page Entries Supported: 1 00:21:45.989 Keep Alive: Not Supported 00:21:45.989 00:21:45.989 NVM Command Set Attributes 00:21:45.989 ========================== 00:21:45.989 Submission Queue Entry Size 00:21:45.989 Max: 1 00:21:45.989 Min: 1 00:21:45.989 Completion Queue Entry Size 00:21:45.989 Max: 1 00:21:45.989 Min: 1 00:21:45.989 Number of Namespaces: 0 00:21:45.989 Compare Command: Not Supported 00:21:45.989 Write Uncorrectable Command: Not Supported 00:21:45.989 Dataset Management Command: Not Supported 00:21:45.989 Write Zeroes Command: Not Supported 00:21:45.989 Set Features Save Field: Not Supported 00:21:45.989 Reservations: Not Supported 00:21:45.989 Timestamp: Not Supported 00:21:45.989 Copy: Not Supported 00:21:45.989 Volatile Write Cache: Not Present 00:21:45.989 Atomic Write Unit (Normal): 1 00:21:45.989 Atomic Write Unit (PFail): 1 00:21:45.989 Atomic Compare & Write Unit: 1 00:21:45.989 Fused Compare & Write: Not Supported 00:21:45.989 Scatter-Gather List 00:21:45.989 SGL Command Set: Supported 00:21:45.989 SGL Keyed: Not Supported 00:21:45.989 SGL Bit Bucket Descriptor: Not Supported 00:21:45.989 SGL Metadata Pointer: Not Supported 00:21:45.989 Oversized SGL: Not Supported 00:21:45.989 SGL Metadata Address: Not Supported 00:21:45.989 SGL Offset: Supported 00:21:45.989 Transport SGL Data Block: Not Supported 00:21:45.989 Replay Protected Memory Block: Not Supported 00:21:45.989 00:21:45.989 Firmware Slot Information 00:21:45.989 ========================= 00:21:45.989 Active slot: 0 00:21:45.989 00:21:45.989 00:21:45.989 Error Log 
00:21:45.989 ========= 00:21:45.989 00:21:45.989 Active Namespaces 00:21:45.989 ================= 00:21:45.989 Discovery Log Page 00:21:45.989 ================== 00:21:45.989 Generation Counter: 2 00:21:45.989 Number of Records: 2 00:21:45.990 Record Format: 0 00:21:45.990 00:21:45.990 Discovery Log Entry 0 00:21:45.990 ---------------------- 00:21:45.990 Transport Type: 3 (TCP) 00:21:45.990 Address Family: 1 (IPv4) 00:21:45.990 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:45.990 Entry Flags: 00:21:45.990 Duplicate Returned Information: 0 00:21:45.990 Explicit Persistent Connection Support for Discovery: 0 00:21:45.990 Transport Requirements: 00:21:45.990 Secure Channel: Not Specified 00:21:45.990 Port ID: 1 (0x0001) 00:21:45.990 Controller ID: 65535 (0xffff) 00:21:45.990 Admin Max SQ Size: 32 00:21:45.990 Transport Service Identifier: 4420 00:21:45.990 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:45.990 Transport Address: 10.0.0.1 00:21:45.990 Discovery Log Entry 1 00:21:45.990 ---------------------- 00:21:45.990 Transport Type: 3 (TCP) 00:21:45.990 Address Family: 1 (IPv4) 00:21:45.990 Subsystem Type: 2 (NVM Subsystem) 00:21:45.990 Entry Flags: 00:21:45.990 Duplicate Returned Information: 0 00:21:45.990 Explicit Persistent Connection Support for Discovery: 0 00:21:45.990 Transport Requirements: 00:21:45.990 Secure Channel: Not Specified 00:21:45.990 Port ID: 1 (0x0001) 00:21:45.990 Controller ID: 65535 (0xffff) 00:21:45.990 Admin Max SQ Size: 32 00:21:45.990 Transport Service Identifier: 4420 00:21:45.990 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:45.990 Transport Address: 10.0.0.1 00:21:45.990 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:46.308 get_feature(0x01) failed 00:21:46.308 get_feature(0x02) failed 00:21:46.308 get_feature(0x04) failed 00:21:46.308 ===================================================== 00:21:46.308 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:46.308 ===================================================== 00:21:46.308 Controller Capabilities/Features 00:21:46.308 ================================ 00:21:46.308 Vendor ID: 0000 00:21:46.308 Subsystem Vendor ID: 0000 00:21:46.308 Serial Number: a6ecc7075d9a7410bd08 00:21:46.308 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:46.308 Firmware Version: 6.8.9-20 00:21:46.308 Recommended Arb Burst: 6 00:21:46.308 IEEE OUI Identifier: 00 00 00 00:21:46.308 Multi-path I/O 00:21:46.308 May have multiple subsystem ports: Yes 00:21:46.308 May have multiple controllers: Yes 00:21:46.308 Associated with SR-IOV VF: No 00:21:46.308 Max Data Transfer Size: Unlimited 00:21:46.308 Max Number of Namespaces: 1024 00:21:46.308 Max Number of I/O Queues: 128 00:21:46.308 NVMe Specification Version (VS): 1.3 00:21:46.308 NVMe Specification Version (Identify): 1.3 00:21:46.308 Maximum Queue Entries: 1024 00:21:46.308 Contiguous Queues Required: No 00:21:46.308 Arbitration Mechanisms Supported 00:21:46.308 Weighted Round Robin: Not Supported 00:21:46.308 Vendor Specific: Not Supported 00:21:46.308 Reset Timeout: 7500 ms 00:21:46.308 Doorbell Stride: 4 bytes 00:21:46.308 NVM Subsystem Reset: Not Supported 00:21:46.308 Command Sets Supported 00:21:46.308 NVM Command Set: Supported 00:21:46.308 Boot Partition: Not Supported 00:21:46.308 Memory 
Page Size Minimum: 4096 bytes 00:21:46.308 Memory Page Size Maximum: 4096 bytes 00:21:46.308 Persistent Memory Region: Not Supported 00:21:46.308 Optional Asynchronous Events Supported 00:21:46.308 Namespace Attribute Notices: Supported 00:21:46.308 Firmware Activation Notices: Not Supported 00:21:46.308 ANA Change Notices: Supported 00:21:46.308 PLE Aggregate Log Change Notices: Not Supported 00:21:46.308 LBA Status Info Alert Notices: Not Supported 00:21:46.308 EGE Aggregate Log Change Notices: Not Supported 00:21:46.308 Normal NVM Subsystem Shutdown event: Not Supported 00:21:46.308 Zone Descriptor Change Notices: Not Supported 00:21:46.308 Discovery Log Change Notices: Not Supported 00:21:46.308 Controller Attributes 00:21:46.308 128-bit Host Identifier: Supported 00:21:46.308 Non-Operational Permissive Mode: Not Supported 00:21:46.308 NVM Sets: Not Supported 00:21:46.308 Read Recovery Levels: Not Supported 00:21:46.308 Endurance Groups: Not Supported 00:21:46.308 Predictable Latency Mode: Not Supported 00:21:46.308 Traffic Based Keep ALive: Supported 00:21:46.308 Namespace Granularity: Not Supported 00:21:46.308 SQ Associations: Not Supported 00:21:46.308 UUID List: Not Supported 00:21:46.308 Multi-Domain Subsystem: Not Supported 00:21:46.308 Fixed Capacity Management: Not Supported 00:21:46.308 Variable Capacity Management: Not Supported 00:21:46.308 Delete Endurance Group: Not Supported 00:21:46.308 Delete NVM Set: Not Supported 00:21:46.308 Extended LBA Formats Supported: Not Supported 00:21:46.308 Flexible Data Placement Supported: Not Supported 00:21:46.309 00:21:46.309 Controller Memory Buffer Support 00:21:46.309 ================================ 00:21:46.309 Supported: No 00:21:46.309 00:21:46.309 Persistent Memory Region Support 00:21:46.309 ================================ 00:21:46.309 Supported: No 00:21:46.309 00:21:46.309 Admin Command Set Attributes 00:21:46.309 ============================ 00:21:46.309 Security Send/Receive: Not Supported 00:21:46.309 Format NVM: Not Supported 00:21:46.309 Firmware Activate/Download: Not Supported 00:21:46.309 Namespace Management: Not Supported 00:21:46.309 Device Self-Test: Not Supported 00:21:46.309 Directives: Not Supported 00:21:46.309 NVMe-MI: Not Supported 00:21:46.309 Virtualization Management: Not Supported 00:21:46.309 Doorbell Buffer Config: Not Supported 00:21:46.309 Get LBA Status Capability: Not Supported 00:21:46.309 Command & Feature Lockdown Capability: Not Supported 00:21:46.309 Abort Command Limit: 4 00:21:46.309 Async Event Request Limit: 4 00:21:46.309 Number of Firmware Slots: N/A 00:21:46.309 Firmware Slot 1 Read-Only: N/A 00:21:46.309 Firmware Activation Without Reset: N/A 00:21:46.309 Multiple Update Detection Support: N/A 00:21:46.309 Firmware Update Granularity: No Information Provided 00:21:46.309 Per-Namespace SMART Log: Yes 00:21:46.309 Asymmetric Namespace Access Log Page: Supported 00:21:46.309 ANA Transition Time : 10 sec 00:21:46.309 00:21:46.309 Asymmetric Namespace Access Capabilities 00:21:46.309 ANA Optimized State : Supported 00:21:46.309 ANA Non-Optimized State : Supported 00:21:46.309 ANA Inaccessible State : Supported 00:21:46.309 ANA Persistent Loss State : Supported 00:21:46.309 ANA Change State : Supported 00:21:46.309 ANAGRPID is not changed : No 00:21:46.309 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:46.309 00:21:46.309 ANA Group Identifier Maximum : 128 00:21:46.309 Number of ANA Group Identifiers : 128 00:21:46.309 Max Number of Allowed Namespaces : 1024 00:21:46.309 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:46.309 Command Effects Log Page: Supported 00:21:46.309 Get Log Page Extended Data: Supported 00:21:46.309 Telemetry Log Pages: Not Supported 00:21:46.309 Persistent Event Log Pages: Not Supported 00:21:46.309 Supported Log Pages Log Page: May Support 00:21:46.309 Commands Supported & Effects Log Page: Not Supported 00:21:46.309 Feature Identifiers & Effects Log Page:May Support 00:21:46.309 NVMe-MI Commands & Effects Log Page: May Support 00:21:46.309 Data Area 4 for Telemetry Log: Not Supported 00:21:46.309 Error Log Page Entries Supported: 128 00:21:46.309 Keep Alive: Supported 00:21:46.309 Keep Alive Granularity: 1000 ms 00:21:46.309 00:21:46.309 NVM Command Set Attributes 00:21:46.309 ========================== 00:21:46.309 Submission Queue Entry Size 00:21:46.309 Max: 64 00:21:46.309 Min: 64 00:21:46.309 Completion Queue Entry Size 00:21:46.309 Max: 16 00:21:46.309 Min: 16 00:21:46.309 Number of Namespaces: 1024 00:21:46.309 Compare Command: Not Supported 00:21:46.309 Write Uncorrectable Command: Not Supported 00:21:46.309 Dataset Management Command: Supported 00:21:46.309 Write Zeroes Command: Supported 00:21:46.309 Set Features Save Field: Not Supported 00:21:46.309 Reservations: Not Supported 00:21:46.309 Timestamp: Not Supported 00:21:46.309 Copy: Not Supported 00:21:46.309 Volatile Write Cache: Present 00:21:46.309 Atomic Write Unit (Normal): 1 00:21:46.309 Atomic Write Unit (PFail): 1 00:21:46.309 Atomic Compare & Write Unit: 1 00:21:46.309 Fused Compare & Write: Not Supported 00:21:46.309 Scatter-Gather List 00:21:46.309 SGL Command Set: Supported 00:21:46.309 SGL Keyed: Not Supported 00:21:46.309 SGL Bit Bucket Descriptor: Not Supported 00:21:46.309 SGL Metadata Pointer: Not Supported 00:21:46.309 Oversized SGL: Not Supported 00:21:46.309 SGL Metadata Address: Not Supported 00:21:46.309 SGL Offset: Supported 00:21:46.309 Transport SGL Data Block: Not Supported 00:21:46.309 Replay Protected Memory Block: Not Supported 00:21:46.309 00:21:46.309 Firmware Slot Information 00:21:46.309 ========================= 00:21:46.309 Active slot: 0 00:21:46.309 00:21:46.309 Asymmetric Namespace Access 00:21:46.309 =========================== 00:21:46.309 Change Count : 0 00:21:46.309 Number of ANA Group Descriptors : 1 00:21:46.309 ANA Group Descriptor : 0 00:21:46.309 ANA Group ID : 1 00:21:46.309 Number of NSID Values : 1 00:21:46.309 Change Count : 0 00:21:46.309 ANA State : 1 00:21:46.309 Namespace Identifier : 1 00:21:46.309 00:21:46.309 Commands Supported and Effects 00:21:46.309 ============================== 00:21:46.309 Admin Commands 00:21:46.309 -------------- 00:21:46.309 Get Log Page (02h): Supported 00:21:46.309 Identify (06h): Supported 00:21:46.309 Abort (08h): Supported 00:21:46.309 Set Features (09h): Supported 00:21:46.309 Get Features (0Ah): Supported 00:21:46.309 Asynchronous Event Request (0Ch): Supported 00:21:46.309 Keep Alive (18h): Supported 00:21:46.309 I/O Commands 00:21:46.309 ------------ 00:21:46.309 Flush (00h): Supported 00:21:46.309 Write (01h): Supported LBA-Change 00:21:46.309 Read (02h): Supported 00:21:46.309 Write Zeroes (08h): Supported LBA-Change 00:21:46.309 Dataset Management (09h): Supported 00:21:46.309 00:21:46.309 Error Log 00:21:46.309 ========= 00:21:46.309 Entry: 0 00:21:46.309 Error Count: 0x3 00:21:46.309 Submission Queue Id: 0x0 00:21:46.309 Command Id: 0x5 00:21:46.309 Phase Bit: 0 00:21:46.309 Status Code: 0x2 00:21:46.309 Status Code Type: 0x0 00:21:46.309 Do Not Retry: 1 00:21:46.309 Error 
Location: 0x28 00:21:46.309 LBA: 0x0 00:21:46.309 Namespace: 0x0 00:21:46.309 Vendor Log Page: 0x0 00:21:46.309 ----------- 00:21:46.309 Entry: 1 00:21:46.309 Error Count: 0x2 00:21:46.309 Submission Queue Id: 0x0 00:21:46.309 Command Id: 0x5 00:21:46.309 Phase Bit: 0 00:21:46.309 Status Code: 0x2 00:21:46.309 Status Code Type: 0x0 00:21:46.309 Do Not Retry: 1 00:21:46.309 Error Location: 0x28 00:21:46.309 LBA: 0x0 00:21:46.309 Namespace: 0x0 00:21:46.309 Vendor Log Page: 0x0 00:21:46.309 ----------- 00:21:46.309 Entry: 2 00:21:46.309 Error Count: 0x1 00:21:46.309 Submission Queue Id: 0x0 00:21:46.309 Command Id: 0x4 00:21:46.309 Phase Bit: 0 00:21:46.309 Status Code: 0x2 00:21:46.309 Status Code Type: 0x0 00:21:46.309 Do Not Retry: 1 00:21:46.309 Error Location: 0x28 00:21:46.309 LBA: 0x0 00:21:46.309 Namespace: 0x0 00:21:46.309 Vendor Log Page: 0x0 00:21:46.309 00:21:46.309 Number of Queues 00:21:46.309 ================ 00:21:46.309 Number of I/O Submission Queues: 128 00:21:46.309 Number of I/O Completion Queues: 128 00:21:46.309 00:21:46.309 ZNS Specific Controller Data 00:21:46.309 ============================ 00:21:46.309 Zone Append Size Limit: 0 00:21:46.309 00:21:46.309 00:21:46.309 Active Namespaces 00:21:46.309 ================= 00:21:46.309 get_feature(0x05) failed 00:21:46.309 Namespace ID:1 00:21:46.309 Command Set Identifier: NVM (00h) 00:21:46.309 Deallocate: Supported 00:21:46.309 Deallocated/Unwritten Error: Not Supported 00:21:46.309 Deallocated Read Value: Unknown 00:21:46.309 Deallocate in Write Zeroes: Not Supported 00:21:46.309 Deallocated Guard Field: 0xFFFF 00:21:46.309 Flush: Supported 00:21:46.309 Reservation: Not Supported 00:21:46.309 Namespace Sharing Capabilities: Multiple Controllers 00:21:46.309 Size (in LBAs): 1310720 (5GiB) 00:21:46.309 Capacity (in LBAs): 1310720 (5GiB) 00:21:46.309 Utilization (in LBAs): 1310720 (5GiB) 00:21:46.309 UUID: e7312cec-06c0-443d-9c53-f60acee21c42 00:21:46.309 Thin Provisioning: Not Supported 00:21:46.309 Per-NS Atomic Units: Yes 00:21:46.309 Atomic Boundary Size (Normal): 0 00:21:46.309 Atomic Boundary Size (PFail): 0 00:21:46.309 Atomic Boundary Offset: 0 00:21:46.309 NGUID/EUI64 Never Reused: No 00:21:46.309 ANA group ID: 1 00:21:46.309 Namespace Write Protected: No 00:21:46.309 Number of LBA Formats: 1 00:21:46.309 Current LBA Format: LBA Format #00 00:21:46.309 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:46.309 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.309 rmmod nvme_tcp 00:21:46.309 rmmod nvme_fabrics 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:46.309 05:32:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:46.309 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.588 05:32:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:46.588 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.589 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:46.589 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:46.589 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:47.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:47.524 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:47.524 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:47.524 ************************************ 00:21:47.524 END TEST nvmf_identify_kernel_target 00:21:47.524 ************************************ 00:21:47.524 00:21:47.524 real 0m3.258s 00:21:47.524 user 0m1.135s 00:21:47.524 sys 0m1.425s 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.524 ************************************ 00:21:47.524 START TEST nvmf_auth_host 00:21:47.524 ************************************ 00:21:47.524 05:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:47.783 * Looking for test storage... 
00:21:47.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:47.783 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:47.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.784 --rc genhtml_branch_coverage=1 00:21:47.784 --rc genhtml_function_coverage=1 00:21:47.784 --rc genhtml_legend=1 00:21:47.784 --rc geninfo_all_blocks=1 00:21:47.784 --rc geninfo_unexecuted_blocks=1 00:21:47.784 00:21:47.784 ' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:47.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.784 --rc genhtml_branch_coverage=1 00:21:47.784 --rc genhtml_function_coverage=1 00:21:47.784 --rc genhtml_legend=1 00:21:47.784 --rc geninfo_all_blocks=1 00:21:47.784 --rc geninfo_unexecuted_blocks=1 00:21:47.784 00:21:47.784 ' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:47.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.784 --rc genhtml_branch_coverage=1 00:21:47.784 --rc genhtml_function_coverage=1 00:21:47.784 --rc genhtml_legend=1 00:21:47.784 --rc geninfo_all_blocks=1 00:21:47.784 --rc geninfo_unexecuted_blocks=1 00:21:47.784 00:21:47.784 ' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:47.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.784 --rc genhtml_branch_coverage=1 00:21:47.784 --rc genhtml_function_coverage=1 00:21:47.784 --rc genhtml_legend=1 00:21:47.784 --rc geninfo_all_blocks=1 00:21:47.784 --rc geninfo_unexecuted_blocks=1 00:21:47.784 00:21:47.784 ' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.784 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:47.785 Cannot find device "nvmf_init_br" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:47.785 Cannot find device "nvmf_init_br2" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:47.785 Cannot find device "nvmf_tgt_br" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.785 Cannot find device "nvmf_tgt_br2" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:47.785 Cannot find device "nvmf_init_br" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:47.785 Cannot find device "nvmf_init_br2" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:47.785 Cannot find device "nvmf_tgt_br" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:47.785 Cannot find device "nvmf_tgt_br2" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:47.785 Cannot find device "nvmf_br" 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:47.785 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:48.043 Cannot find device "nvmf_init_if" 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:48.043 Cannot find device "nvmf_init_if2" 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.043 05:32:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:21:48.043 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.044 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.044 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:48.044 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:48.044 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:48.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:48.302 00:21:48.302 --- 10.0.0.3 ping statistics --- 00:21:48.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.302 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:48.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:48.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:21:48.302 00:21:48.302 --- 10.0.0.4 ping statistics --- 00:21:48.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.302 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:48.302 00:21:48.302 --- 10.0.0.1 ping statistics --- 00:21:48.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.302 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:48.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:21:48.302 00:21:48.302 --- 10.0.0.2 ping statistics --- 00:21:48.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.302 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78814 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78814 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78814 ']' 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
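By this point nvmf_veth_init has built the whole test network from plain iproute2 commands: a dedicated namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends (10.0.0.3/.4), the initiator-side ends stay in the host namespace (10.0.0.1/.2), their peer interfaces are all enslaved to the nvmf_br bridge, iptables ACCEPT rules open TCP port 4420, and the cross-namespace pings confirm connectivity before nvmf_tgt is started inside the namespace. A condensed sketch of the same topology with a single interface per side (names and addresses taken from the log; run as root; the real script creates two pairs per side and adds iptables comments):

  #!/usr/bin/env bash
  set -e
  # One initiator-side and one target-side veth pair, joined by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> host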
00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:48.302 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:48.561 05:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9605c45e194cc8e296d9276590aab167 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tv9 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9605c45e194cc8e296d9276590aab167 0 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9605c45e194cc8e296d9276590aab167 0 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9605c45e194cc8e296d9276590aab167 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tv9 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tv9 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Tv9 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.561 05:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38e5b7a8c042972ccb5d0933cc7e09db8149377ab139fad4d273fee6c9d2ac52 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vbS 00:21:48.561 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38e5b7a8c042972ccb5d0933cc7e09db8149377ab139fad4d273fee6c9d2ac52 3 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38e5b7a8c042972ccb5d0933cc7e09db8149377ab139fad4d273fee6c9d2ac52 3 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38e5b7a8c042972ccb5d0933cc7e09db8149377ab139fad4d273fee6c9d2ac52 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vbS 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vbS 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vbS 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cef29e607312bb77547ef651819d17d45e512052b6bd116c 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ypk 00:21:48.819 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cef29e607312bb77547ef651819d17d45e512052b6bd116c 0 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cef29e607312bb77547ef651819d17d45e512052b6bd116c 0 
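The gen_dhchap_key calls around this point all follow the same visible pattern: read the requested number of random bytes with xxd, write the result to a 0600 temp file, and have format_dhchap_key wrap it into the DH-HMAC-CHAP secret string via inline Python. The Python program itself is not shown in the trace, so the following is a hypothetical reimplementation that assumes the standard NVMe secret representation "DHHC-1:<hash-id>:<base64(key || CRC-32)>:"; only the xxd/mktemp/chmod steps are taken verbatim from the log:

  #!/usr/bin/env bash
  # Hedged sketch of gen_dhchap_key null 32 (16 random bytes, digest id 0 = null).
  len=32
  digest=0
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  # Assumed formatting step: base64-encode the key with its CRC-32 appended (little endian).
  python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); d=int(sys.argv[2]); b=k+binascii.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(b).decode()))' "$key" "$digest" > "$file"
  chmod 0600 "$file"
  echo "$file"   # the path lands in keys[]/ckeys[] and is later registered with keyring_file_add_key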
00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cef29e607312bb77547ef651819d17d45e512052b6bd116c 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ypk 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ypk 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Ypk 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b5afb3cea2b2d20ab9c0a9846551c1815f9b86c2024d383 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LwB 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b5afb3cea2b2d20ab9c0a9846551c1815f9b86c2024d383 2 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b5afb3cea2b2d20ab9c0a9846551c1815f9b86c2024d383 2 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0b5afb3cea2b2d20ab9c0a9846551c1815f9b86c2024d383 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LwB 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LwB 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LwB 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:48.820 05:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=906dbc09efd97645de6ca9d442f8cf33 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ttM 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 906dbc09efd97645de6ca9d442f8cf33 1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 906dbc09efd97645de6ca9d442f8cf33 1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=906dbc09efd97645de6ca9d442f8cf33 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:48.820 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ttM 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ttM 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ttM 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=35ed189b554a663fccfbeca7c731f89e 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.28K 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 35ed189b554a663fccfbeca7c731f89e 1 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 35ed189b554a663fccfbeca7c731f89e 1 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=35ed189b554a663fccfbeca7c731f89e 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.28K 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.28K 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.28K 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=03663a440f4d6af35f73dcf5a42db70631451cfe9db4d52e 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Iz 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 03663a440f4d6af35f73dcf5a42db70631451cfe9db4d52e 2 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 03663a440f4d6af35f73dcf5a42db70631451cfe9db4d52e 2 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=03663a440f4d6af35f73dcf5a42db70631451cfe9db4d52e 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Iz 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Iz 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4Iz 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:49.079 05:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=27f6065d7ebae227eec6013c5c31247b 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2X8 00:21:49.079 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 27f6065d7ebae227eec6013c5c31247b 0 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 27f6065d7ebae227eec6013c5c31247b 0 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=27f6065d7ebae227eec6013c5c31247b 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2X8 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2X8 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2X8 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4eee2589188a8fa9d3bac60e18f68ab22b19426018a61ac8b86d766cd0acfc9d 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4Gb 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4eee2589188a8fa9d3bac60e18f68ab22b19426018a61ac8b86d766cd0acfc9d 3 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4eee2589188a8fa9d3bac60e18f68ab22b19426018a61ac8b86d766cd0acfc9d 3 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4eee2589188a8fa9d3bac60e18f68ab22b19426018a61ac8b86d766cd0acfc9d 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:49.080 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4Gb 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4Gb 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4Gb 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78814 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78814 ']' 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:49.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:49.338 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tv9 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vbS ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vbS 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Ypk 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LwB ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LwB 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ttM 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.28K ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.28K 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4Iz 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2X8 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2X8 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4Gb 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:49.597 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.598 05:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:49.598 05:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:49.598 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:49.598 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:49.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:49.856 Waiting for block devices as requested 00:21:49.856 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:50.113 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:50.679 No valid GPT data, bailing 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:50.679 05:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:50.679 No valid GPT data, bailing 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:50.679 No valid GPT data, bailing 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:50.679 No valid GPT data, bailing 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:50.679 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:50.680 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -a 10.0.0.1 -t tcp -s 4420 00:21:50.938 00:21:50.938 Discovery Log Number of Records 2, Generation counter 2 00:21:50.938 =====Discovery Log Entry 0====== 00:21:50.938 trtype: tcp 00:21:50.938 adrfam: ipv4 00:21:50.938 subtype: current discovery subsystem 00:21:50.938 treq: not specified, sq flow control disable supported 00:21:50.938 portid: 1 00:21:50.938 trsvcid: 4420 00:21:50.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:50.938 traddr: 10.0.0.1 00:21:50.938 eflags: none 00:21:50.938 sectype: none 00:21:50.938 =====Discovery Log Entry 1====== 00:21:50.938 trtype: tcp 00:21:50.938 adrfam: ipv4 00:21:50.938 subtype: nvme subsystem 00:21:50.938 treq: not specified, sq flow control disable supported 00:21:50.938 portid: 1 00:21:50.938 trsvcid: 4420 00:21:50.938 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:50.938 traddr: 10.0.0.1 00:21:50.938 eflags: none 00:21:50.938 sectype: none 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:21:50.938 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.939 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.198 nvme0n1 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:51.198 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.199 nvme0n1 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 
05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.457 05:32:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.457 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.458 nvme0n1 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:51.458 05:32:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.458 05:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.717 nvme0n1 00:21:51.717 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.717 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.717 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.717 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.717 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.718 05:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.718 nvme0n1 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.718 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.977 
05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
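The entries above are one complete iteration of the test's digest/dhgroup/key matrix: a DH-HMAC-CHAP key is provisioned for the allowed host on the kernel target through configfs, the SPDK initiator is told which digests and DH groups to offer, and the controller is attached with the matching key pair, verified, and detached before the next iteration starts. The script below is a condensed sketch of that pattern, not part of the test itself: bash xtrace does not show redirect targets, so the attr_*/dhchap_* attribute paths are inferred from the standard nvmet configfs layout, the placeholder DHHC-1 strings stand in for the generated keys, and key0/ckey0 are assumed to be key names already registered with the SPDK keyring earlier in the run (not shown here).

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (attribute names are assumptions).
set -euo pipefail

nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/$subnqn

# Target side: restrict the subsystem to a single allowed host and give that
# host a DH-HMAC-CHAP key (plus a controller key for bidirectional auth).
mkdir -p "$nvmet/hosts/$hostnqn"
echo 0 > "$subsys/attr_allow_any_host"                         # assumed redirect target
ln -sf "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$nvmet/hosts/$hostnqn/dhchap_hash"      # digest under test
echo ffdhe2048      > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:<host key placeholder>'       > "$nvmet/hosts/$hostnqn/dhchap_key"
echo 'DHHC-1:02:<controller key placeholder>' > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"

# Initiator side (SPDK scripts/rpc.py): offer the digest/dhgroup, attach with
# the key pair, confirm the controller exists, then detach for the next case.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0
rpc.py bdev_nvme_detach_controller nvme0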
00:21:51.977 nvme0n1 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.977 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:52.545 05:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.545 nvme0n1 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.545 05:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.545 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.546 05:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.546 05:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 nvme0n1 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 nvme0n1 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.859 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.120 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.121 nvme0n1 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.121 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.380 nvme0n1 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
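For readers following the trace: the @101–@104 markers correspond to the driving loops in host/auth.sh. A rough sketch of that structure, reconstructed only from what the trace shows (array contents abbreviated; the sha256 digest is fixed at an outer level not visible in this excerpt):

for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ... in this run
    for keyid in "${!keys[@]}"; do         # 0..4
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # program the target side for this key
        connect_authenticate sha256 "$dhgroup" "$keyid"    # attach, verify, detach on the host side
    done
done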
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:53.380 05:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.317 05:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.317 nvme0n1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.317 05:32:08 
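The host-side half of connect_authenticate sha256 ffdhe4096 1 traced above reduces to two RPCs; a minimal sketch, assuming rpc_cmd wraps scripts/rpc.py against the running target and that key1/ckey1 were registered earlier in the script:

# Restrict the initiator to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Attach to the target at the address picked by get_main_ns_ip (10.0.0.1 here),
# authenticating with key1 and requesting bidirectional auth via ckey1.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1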
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.317 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 nvme0n1 00:21:54.576 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.576 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.576 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.576 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 05:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:54.576 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.577 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 nvme0n1 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.833 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.834 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.091 nvme0n1 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.091 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:55.350 05:32:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.350 nvme0n1 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.350 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:55.608 05:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.511 05:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.770 nvme0n1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.770 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.029 nvme0n1 00:21:58.029 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.029 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.029 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.029 05:32:12 
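Each successful attach is checked and torn down at auth.sh@64/@65 before the next key is tried; a sketch of that step using the same RPCs seen in the trace (the name variable is illustrative):

# Confirm exactly the expected controller came up, then detach it for the next iteration.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0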
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.029 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.029 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:58.341 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.342 05:32:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.342 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.601 nvme0n1 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.601 05:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.601 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.601 
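get_main_ns_ip (nvmf/common.sh@769–783 in the trace) selects the address to dial from the transport type. A reconstruction from the traced statements; the TEST_TRANSPORT name and the indirect expansion are assumptions, since xtrace only shows the chosen variable name and the final 10.0.0.1:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP)
    # Both -z checks at @775: a transport is set and a candidate exists for it (tcp here).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirection assumed; the trace shows the value 10.0.0.1
    echo "${!ip}"
}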
05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.170 nvme0n1 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.170 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 nvme0n1 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.429 05:32:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.429 05:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 nvme0n1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
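The repeated ip_candidates block is nvmf/common.sh's get_main_ns_ip picking which environment variable names the address the host should dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferencing it; in this run that resolves to 10.0.0.1 every time. A simplified, self-contained sketch of that selection (variable names taken from the trace, the value from the echoed result):

# Simplified address selection, mirroring nvmf/common.sh get_main_ns_ip.
declare -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
)
transport=tcp                       # TEST_TRANSPORT in the real helper
NVMF_INITIATOR_IP=10.0.0.1          # the value this run resolved, per the echoed result
varname=${ip_candidates[$transport]}
ip=${!varname}                      # indirect expansion: read the named variable
echo "$ip"                          # -> 10.0.0.1, matching the trace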
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.365 05:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.932 nvme0n1 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.932 
05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.932 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.501 nvme0n1 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.501 05:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.501 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.501 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.501 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.501 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.760 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.761 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.328 nvme0n1 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.328 05:32:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.328 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.329 05:32:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.329 05:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.894 nvme0n1 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.894 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
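Key slot 4 is the one case above where no controller key exists (ckey is empty), which is why its attach carries only --dhchap-key key4 and no --dhchap-ctrlr-key. The ${var:+...} expansion at host/auth.sh@58 is what makes that argument optional; in isolation the idiom behaves like this (the array contents here are placeholders, not the real secrets):

# The ":+" expansion yields the extra arguments only when a controller key
# is defined for the slot; for an empty entry the array stays empty.
ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "ctrl-secret-3" "")
for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra_args=${ckey[*]:-<none>}"
done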
ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.152 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.153 nvme0n1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
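All of the secrets in this trace use the DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>: where, as I read the NVMe-oF in-band authentication key format used by nvme-cli, <hh> indicates the transform applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a CRC32. One of the keys above can be taken apart like this (key copied verbatim from the log; the field interpretation is mine, not something the log states):

# Split one of the logged DHHC-1 secrets into its fields.
key='DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh:'
echo "transform id : $(cut -d: -f2 <<<"$key")"                      # 00 -> no hash transform
echo "payload bytes: $(cut -d: -f3 <<<"$key" | base64 -d | wc -c)"  # 36 = 32-byte secret + 4-byte CRC32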
"ckey${keyid}"}) 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.153 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 nvme0n1 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:03.412 
05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 nvme0n1 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.671 
05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.671 nvme0n1 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.671 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.930 nvme0n1 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
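Everything in this part of the log is generated by the nested loops at host/auth.sh@100-@104: every digest is paired with every DH group and every key slot, and each combination runs the set-key / connect / verify / detach cycle seen above. Structurally it amounts to the sketch below, with the lists abridged to the values that actually appear in this excerpt and an echo standing in for the real nvmet_auth_set_key / connect_authenticate calls:

# Enumerate the combinations exercised here; in the real script each line
# corresponds to one nvmet_auth_set_key + connect_authenticate pass.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
keyids=(0 1 2 3 4)
for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
                for keyid in "${keyids[@]}"; do
                        echo "nvmet_auth_set_key $digest $dhgroup $keyid && connect_authenticate $digest $dhgroup $keyid"
                done
        done
done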
host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.930 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.188 nvme0n1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.188 
05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.188 05:32:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.188 nvme0n1 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.188 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.189 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.189 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:04.447 05:32:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.447 nvme0n1 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.447 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.707 05:32:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.707 05:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.707 nvme0n1 00:22:04.707 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:04.708 
05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.708 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
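The records above show connect_authenticate cycling through every DHCHAP key for the sha384/ffdhe3072 pair: the initiator is restricted to that digest/dhgroup with bdev_nvme_set_options, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a bidirectional key exists), and bdev_nvme_get_controllers / bdev_nvme_detach_controller verify the connection and tear it down before the next key. A minimal stand-alone sketch of that per-key loop, assuming SPDK's scripts/rpc.py is called directly in place of the test's rpc_cmd wrapper, that key0..key4 (and any ckeyN) are already registered with the target listening at 10.0.0.1:4420, and that the helper variable names below are illustrative rather than the test's own:

#!/usr/bin/env bash
# Hedged sketch of the per-key DH-HMAC-CHAP connect loop traced above.
# Assumptions (not from the log): rpc.py is on PATH and talks to the local
# SPDK initiator app; ckeys[] may be left empty for unidirectional auth.
digest=sha384
dhgroup=ffdhe3072
ckeys=()            # e.g. ckeys[0]=ckey0 ... when a controller key exists

for keyid in 0 1 2 3 4; do
    # Allow only the digest/dhgroup pair under test on the initiator side.
    rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key; add the controller key only if one is set.
    extra=()
    [[ -n "${ckeys[keyid]:-}" ]] && extra=(--dhchap-ctrlr-key "ckey${keyid}")
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${extra[@]}"

    # Confirm the controller authenticated, then detach for the next key.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
    rpc.py bdev_nvme_detach_controller nvme0
done

The same loop is then repeated for the other dhgroups (ffdhe4096, ffdhe6144, ...), which is what the following records continue to trace.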
00:22:04.966 nvme0n1 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:04.966 05:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.966 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.967 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.225 nvme0n1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.225 05:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.225 05:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.225 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.484 nvme0n1 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.484 05:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.743 nvme0n1 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.743 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.744 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 nvme0n1 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 nvme0n1 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:06.262 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 05:32:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 05:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.830 nvme0n1 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:06.830 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.831 05:32:21 
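The attach/verify/detach cycle traced above (host/auth.sh@55-65) can be summarised as the following sketch. It is reconstructed from the trace, so the exact local variable handling in the real connect_authenticate may differ; the RPC invocations themselves are copied from the log:

# Sketch of connect_authenticate from the trace: restrict the host to one
# digest/dhgroup, attach with the selected key pair, check that the controller
# came up as nvme0, then tear it down again.
connect_authenticate() {
	local digest dhgroup keyid ckey
	digest="$1" dhgroup="$2" keyid="$3"
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58, empty for keyid 4
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"              # @61
	# @64-65: verify the controller name, then detach before the next keyid.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}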
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.831 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.396 nvme0n1 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:07.396 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.397 05:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.655 nvme0n1 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.655 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.914 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.173 nvme0n1 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.173 05:32:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.173 05:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.740 nvme0n1 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.740 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.345 nvme0n1 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.345 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.346 05:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.283 nvme0n1 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.283 05:32:24 
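get_main_ns_ip (nvmf/common.sh@769-783), traced repeatedly above, appears to do no more than map the active transport to an address variable and print its value. A minimal sketch follows; TEST_TRANSPORT is an assumed name for whatever selects "tcp" in this run, since the trace only shows the tcp branch resolving to 10.0.0.1:

# Sketch of get_main_ns_ip from the trace: choose the address variable for the
# transport in use and resolve it via indirect expansion.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773
	# Assumed selector variable; the log only shows the value "tcp" being tested.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
	ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1                  # @778: indirect expansion, 10.0.0.1 here
	echo "${!ip}"                                # @783
}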
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.283 05:32:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.283 05:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.852 nvme0n1 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:10.852 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.852 
05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.420 nvme0n1 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.420 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.679 05:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.247 nvme0n1 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:12.247 05:32:26 
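At this point the trace moves on to the sha512 digest, which confirms the loop structure visible at host/auth.sh@100-104: every digest/dhgroup/keyid combination is first pushed to the target and then authenticated end-to-end. A sketch of that structure, with the array contents limited to what this log actually shows:

# Loop shape reconstructed from host/auth.sh@100-104 in this trace.
for digest in "${digests[@]}"; do           # @100: sha384, sha512 seen here
	for dhgroup in "${dhgroups[@]}"; do     # @101: ffdhe2048 ... ffdhe8192
		for keyid in "${!keys[@]}"; do      # @102: 0..4
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
			connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
		done
	done
done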
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.247 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.248 05:32:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.248 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 nvme0n1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:12.507 05:32:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 nvme0n1 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.507 05:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.766 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.767 nvme0n1 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.767 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.025 nvme0n1 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:13.025 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.026 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.285 nvme0n1 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.285 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.286 nvme0n1 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.286 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.545 nvme0n1 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.545 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.546 05:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:13.546 
05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.546 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.805 nvme0n1 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.805 
05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.805 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.064 nvme0n1 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.064 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.323 nvme0n1 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.323 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.324 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 nvme0n1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.651 
05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.651 05:32:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.651 05:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 nvme0n1 00:22:14.651 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.651 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.651 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.651 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.651 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:14.909 05:32:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:14.909 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.910 nvme0n1 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.910 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.168 05:32:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.168 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.428 nvme0n1 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:15.428 
05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.428 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
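The block above repeats the same host-side sequence for every DH-HMAC-CHAP key index: nvmet_auth_set_key echoes 'hmac(sha512)', the DH group, and the two DHHC-1 secrets for the target side (the destination of those echoes is not visible in this excerpt), and connect_authenticate then restricts the initiator to that digest/DH group, attaches a controller with the host key and, when one is configured, the controller key, checks that the controller appears, and detaches it. Below is a minimal sketch of one such iteration using SPDK's RPC client directly; the rpc.py path and shell variable names are assumptions for illustration, while the RPC names, flags, NQNs, and key names (key2/ckey2) are taken verbatim from the trace.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (sha512 + ffdhe4096, keyid=2),
# assuming an SPDK/kernel target already listens on 10.0.0.1:4420 and that the
# DHHC-1 secrets were registered earlier in the run under the names key2/ckey2.
RPC=${RPC:-./scripts/rpc.py}   # assumed path; the trace uses the harness wrapper rpc_cmd

digest=sha512
dhgroup=ffdhe4096
keyid=2
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
target_ip=10.0.0.1             # what get_main_ns_ip resolves for tcp in this run

# Allow only the digest/DH group under test for DH-HMAC-CHAP.
"$RPC" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over TCP; the attach only succeeds if authentication completes.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$target_ip" -s 4420 \
  -q "$hostnqn" -n "$subnqn" \
  --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller came up, then tear it down before the next iteration.
[[ "$("$RPC" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
"$RPC" bdev_nvme_detach_controller nvme0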
00:22:15.687 nvme0n1 00:22:15.687 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.687 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.687 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.687 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.687 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.688 05:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:15.688 05:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.688 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.946 nvme0n1 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.946 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.204 05:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:16.204 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.205 05:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.205 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.463 nvme0n1 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.463 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.464 05:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.030 nvme0n1 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.030 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.288 nvme0n1 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.288 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.547 05:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.806 nvme0n1 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTYwNWM0NWUxOTRjYzhlMjk2ZDkyNzY1OTBhYWIxNjeEUZgh: 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhlNWI3YThjMDQyOTcyY2NiNWQwOTMzY2M3ZTA5ZGI4MTQ5Mzc3YWIxMzlmYWQ0ZDI3M2ZlZTZjOWQyYWM1Mr8fl60=: 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.806 05:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.806 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 nvme0n1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.743 05:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.743 05:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.309 nvme0n1 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:19.309 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.310 05:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.948 nvme0n1 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDM2NjNhNDQwZjRkNmFmMzVmNzNkY2Y1YTQyZGI3MDYzMTQ1MWNmZTlkYjRkNTJlB3IPGQ==: 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjdmNjA2NWQ3ZWJhZTIyN2VlYzYwMTNjNWMzMTI0N2IWnNzW: 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.948 05:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.514 nvme0n1 00:22:20.514 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.514 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.514 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.514 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.514 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGVlZTI1ODkxODhhOGZhOWQzYmFjNjBlMThmNjhhYjIyYjE5NDI2MDE4YTYxYWM4Yjg2ZDc2NmNkMGFjZmM5ZL94QMM=: 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.772 05:32:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.772 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.339 nvme0n1 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.339 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.340 request: 00:22:21.340 { 00:22:21.340 "name": "nvme0", 00:22:21.340 "trtype": "tcp", 00:22:21.340 "traddr": "10.0.0.1", 00:22:21.340 "adrfam": "ipv4", 00:22:21.340 "trsvcid": "4420", 00:22:21.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.340 "prchk_reftag": false, 00:22:21.340 "prchk_guard": false, 00:22:21.340 "hdgst": false, 00:22:21.340 "ddgst": false, 00:22:21.340 "allow_unrecognized_csi": false, 00:22:21.340 "method": "bdev_nvme_attach_controller", 00:22:21.340 "req_id": 1 00:22:21.340 } 00:22:21.340 Got JSON-RPC error response 00:22:21.340 response: 00:22:21.340 { 00:22:21.340 "code": -5, 00:22:21.340 "message": "Input/output error" 00:22:21.340 } 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.340 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.599 request: 00:22:21.599 { 00:22:21.599 "name": "nvme0", 00:22:21.599 "trtype": "tcp", 00:22:21.599 "traddr": "10.0.0.1", 00:22:21.599 "adrfam": "ipv4", 00:22:21.599 "trsvcid": "4420", 00:22:21.599 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.599 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.599 "prchk_reftag": false, 00:22:21.599 "prchk_guard": false, 00:22:21.599 "hdgst": false, 00:22:21.599 "ddgst": false, 00:22:21.599 "dhchap_key": "key2", 00:22:21.599 "allow_unrecognized_csi": false, 00:22:21.599 "method": "bdev_nvme_attach_controller", 00:22:21.599 "req_id": 1 00:22:21.599 } 00:22:21.599 Got JSON-RPC error response 00:22:21.599 response: 00:22:21.599 { 00:22:21.599 "code": -5, 00:22:21.599 "message": "Input/output error" 00:22:21.599 } 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.599 05:32:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.599 05:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.599 request: 00:22:21.599 { 00:22:21.599 "name": "nvme0", 00:22:21.599 "trtype": "tcp", 00:22:21.599 "traddr": "10.0.0.1", 00:22:21.599 "adrfam": "ipv4", 00:22:21.599 "trsvcid": "4420", 
00:22:21.599 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.599 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.599 "prchk_reftag": false, 00:22:21.599 "prchk_guard": false, 00:22:21.599 "hdgst": false, 00:22:21.599 "ddgst": false, 00:22:21.599 "dhchap_key": "key1", 00:22:21.599 "dhchap_ctrlr_key": "ckey2", 00:22:21.599 "allow_unrecognized_csi": false, 00:22:21.599 "method": "bdev_nvme_attach_controller", 00:22:21.599 "req_id": 1 00:22:21.599 } 00:22:21.599 Got JSON-RPC error response 00:22:21.599 response: 00:22:21.599 { 00:22:21.599 "code": -5, 00:22:21.599 "message": "Input/output error" 00:22:21.599 } 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.599 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:21.600 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:21.600 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:21.600 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.600 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.600 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.858 nvme0n1 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:21.858 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.859 request: 00:22:21.859 { 00:22:21.859 "name": "nvme0", 00:22:21.859 "dhchap_key": "key1", 00:22:21.859 "dhchap_ctrlr_key": "ckey2", 00:22:21.859 "method": "bdev_nvme_set_keys", 00:22:21.859 "req_id": 1 00:22:21.859 } 00:22:21.859 Got JSON-RPC error response 00:22:21.859 response: 00:22:21.859 
{ 00:22:21.859 "code": -13, 00:22:21.859 "message": "Permission denied" 00:22:21.859 } 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:21.859 05:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2VmMjllNjA3MzEyYmI3NzU0N2VmNjUxODE5ZDE3ZDQ1ZTUxMjA1MmI2YmQxMTZjB7f4TQ==: 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: ]] 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGI1YWZiM2NlYTJiMmQyMGFiOWMwYTk4NDY1NTFjMTgxNWY5Yjg2YzIwMjRkMzgz0ulGNQ==: 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 nvme0n1 00:22:23.234 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA2ZGJjMDllZmQ5NzY0NWRlNmNhOWQ0NDJmOGNmMzNb0QJX: 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: ]] 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzVlZDE4OWI1NTRhNjYzZmNjZmJlY2E3YzczMWY4OWV+CC59: 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.235 request: 00:22:23.235 { 00:22:23.235 "name": "nvme0", 00:22:23.235 "dhchap_key": "key2", 00:22:23.235 "dhchap_ctrlr_key": "ckey1", 00:22:23.235 "method": "bdev_nvme_set_keys", 00:22:23.235 "req_id": 1 00:22:23.235 } 00:22:23.235 Got JSON-RPC error response 00:22:23.235 response: 00:22:23.235 { 00:22:23.235 "code": -13, 00:22:23.235 "message": "Permission denied" 00:22:23.235 } 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:23.235 05:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.170 rmmod nvme_tcp 00:22:24.170 rmmod nvme_fabrics 00:22:24.170 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78814 ']' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78814 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78814 ']' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78814 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78814 00:22:24.429 killing process with pid 78814 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78814' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78814 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78814 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:24.429 05:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:24.429 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:24.743 05:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:24.743 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:25.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.567 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:22:25.567 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:25.567 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Tv9 /tmp/spdk.key-null.Ypk /tmp/spdk.key-sha256.ttM /tmp/spdk.key-sha384.4Iz /tmp/spdk.key-sha512.4Gb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:25.567 05:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:25.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.825 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:25.825 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:26.084 00:22:26.084 real 0m38.387s 00:22:26.084 user 0m34.377s 00:22:26.084 sys 0m3.629s 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.084 ************************************ 00:22:26.084 END TEST nvmf_auth_host 00:22:26.084 ************************************ 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.084 ************************************ 00:22:26.084 START TEST nvmf_digest 00:22:26.084 ************************************ 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.084 * Looking for test storage... 
00:22:26.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.084 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.085 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.344 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.345 05:32:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.345 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:26.345 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:26.346 Cannot find device "nvmf_init_br" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:26.346 Cannot find device "nvmf_init_br2" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:26.346 Cannot find device "nvmf_tgt_br" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:26.346 Cannot find device "nvmf_tgt_br2" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:26.346 Cannot find device "nvmf_init_br" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:26.346 Cannot find device "nvmf_init_br2" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:26.346 Cannot find device "nvmf_tgt_br" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:26.346 Cannot find device "nvmf_tgt_br2" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:26.346 Cannot find device "nvmf_br" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:26.346 Cannot find device "nvmf_init_if" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:26.346 Cannot find device "nvmf_init_if2" 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.346 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.605 05:32:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:26.605 05:32:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:26.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:26.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:26.605 00:22:26.605 --- 10.0.0.3 ping statistics --- 00:22:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.605 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:26.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:26.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:22:26.605 00:22:26.605 --- 10.0.0.4 ping statistics --- 00:22:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.605 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:26.605 00:22:26.605 --- 10.0.0.1 ping statistics --- 00:22:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.605 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:26.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:22:26.605 00:22:26.605 --- 10.0.0.2 ping statistics --- 00:22:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.605 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:26.605 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:26.606 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:26.606 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.606 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:26.865 ************************************ 00:22:26.865 START TEST nvmf_digest_clean 00:22:26.865 ************************************ 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
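Before run_digest begins, nvmftestinit/nvmf_veth_init has already built the virtual topology every connection below runs over: two initiator veth pairs in the root namespace (10.0.0.1/.2), two target veth pairs inside nvmf_tgt_ns_spdk (10.0.0.3/.4), all joined by the nvmf_br bridge, with TCP port 4420 opened in iptables; the pings above confirm reachability. A minimal standalone sketch assembled from the ip(8)/iptables commands logged above (the loops are a condensation, and the iptables comment tags plus error handling from nvmf/common.sh are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                  # root namespace initiator -> in-namespace target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # and the reverse direction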
00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80470 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80470 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80470 ']' 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.865 05:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:26.865 [2024-11-20 05:32:41.198127] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:26.865 [2024-11-20 05:32:41.198243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.865 [2024-11-20 05:32:41.356339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.123 [2024-11-20 05:32:41.395086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.123 [2024-11-20 05:32:41.395156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.123 [2024-11-20 05:32:41.395170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.123 [2024-11-20 05:32:41.395179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.123 [2024-11-20 05:32:41.395188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.123 [2024-11-20 05:32:41.395585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.058 [2024-11-20 05:32:42.332952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:28.058 null0 00:22:28.058 [2024-11-20 05:32:42.368912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.058 [2024-11-20 05:32:42.393068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80502 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80502 /var/tmp/bperf.sock 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80502 ']' 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:28.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:28.058 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.058 [2024-11-20 05:32:42.459545] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:28.058 [2024-11-20 05:32:42.459675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80502 ] 00:22:28.323 [2024-11-20 05:32:42.611223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.323 [2024-11-20 05:32:42.652192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.323 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.323 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:28.323 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:28.323 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:28.323 05:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:28.584 [2024-11-20 05:32:43.033813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:28.584 05:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.584 05:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.150 nvme0n1 00:22:29.150 05:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:29.150 05:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:29.150 Running I/O for 2 seconds... 
00:22:31.110 14097.00 IOPS, 55.07 MiB/s [2024-11-20T05:32:45.623Z] 14224.00 IOPS, 55.56 MiB/s 00:22:31.110 Latency(us) 00:22:31.110 [2024-11-20T05:32:45.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.110 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:31.110 nvme0n1 : 2.01 14226.91 55.57 0.00 0.00 8989.67 8400.52 18469.24 00:22:31.110 [2024-11-20T05:32:45.623Z] =================================================================================================================== 00:22:31.110 [2024-11-20T05:32:45.623Z] Total : 14226.91 55.57 0.00 0.00 8989.67 8400.52 18469.24 00:22:31.110 { 00:22:31.110 "results": [ 00:22:31.110 { 00:22:31.110 "job": "nvme0n1", 00:22:31.110 "core_mask": "0x2", 00:22:31.110 "workload": "randread", 00:22:31.110 "status": "finished", 00:22:31.110 "queue_depth": 128, 00:22:31.110 "io_size": 4096, 00:22:31.110 "runtime": 2.008588, 00:22:31.110 "iops": 14226.909649963058, 00:22:31.110 "mibps": 55.573865820168194, 00:22:31.110 "io_failed": 0, 00:22:31.110 "io_timeout": 0, 00:22:31.110 "avg_latency_us": 8989.671293902067, 00:22:31.110 "min_latency_us": 8400.523636363636, 00:22:31.110 "max_latency_us": 18469.236363636363 00:22:31.110 } 00:22:31.110 ], 00:22:31.110 "core_count": 1 00:22:31.110 } 00:22:31.368 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:31.368 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:31.368 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:31.368 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:31.368 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:31.368 | select(.opcode=="crc32c") 00:22:31.368 | "\(.module_name) \(.executed)"' 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80502 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80502 ']' 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80502 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80502 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:22:31.627 killing process with pid 80502 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80502' 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80502 00:22:31.627 Received shutdown signal, test time was about 2.000000 seconds 00:22:31.627 00:22:31.627 Latency(us) 00:22:31.627 [2024-11-20T05:32:46.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.627 [2024-11-20T05:32:46.140Z] =================================================================================================================== 00:22:31.627 [2024-11-20T05:32:46.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.627 05:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80502 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80555 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80555 /var/tmp/bperf.sock 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80555 ']' 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:31.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:31.627 05:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:31.886 [2024-11-20 05:32:46.211836] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:31.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:31.886 Zero copy mechanism will not be used. 
00:22:31.886 [2024-11-20 05:32:46.211957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80555 ] 00:22:31.886 [2024-11-20 05:32:46.364705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.145 [2024-11-20 05:32:46.403654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:33.082 [2024-11-20 05:32:47.553816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.082 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.650 nvme0n1 00:22:33.650 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:33.650 05:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.650 Zero copy mechanism will not be used. 00:22:33.650 Running I/O for 2 seconds... 
00:22:35.965 7184.00 IOPS, 898.00 MiB/s [2024-11-20T05:32:50.478Z] 7144.00 IOPS, 893.00 MiB/s 00:22:35.965 Latency(us) 00:22:35.965 [2024-11-20T05:32:50.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.965 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:35.965 nvme0n1 : 2.00 7144.36 893.04 0.00 0.00 2235.85 2055.45 10128.29 00:22:35.965 [2024-11-20T05:32:50.478Z] =================================================================================================================== 00:22:35.965 [2024-11-20T05:32:50.478Z] Total : 7144.36 893.04 0.00 0.00 2235.85 2055.45 10128.29 00:22:35.965 { 00:22:35.965 "results": [ 00:22:35.965 { 00:22:35.965 "job": "nvme0n1", 00:22:35.965 "core_mask": "0x2", 00:22:35.965 "workload": "randread", 00:22:35.965 "status": "finished", 00:22:35.965 "queue_depth": 16, 00:22:35.965 "io_size": 131072, 00:22:35.965 "runtime": 2.002139, 00:22:35.965 "iops": 7144.359107934065, 00:22:35.965 "mibps": 893.0448884917581, 00:22:35.965 "io_failed": 0, 00:22:35.965 "io_timeout": 0, 00:22:35.965 "avg_latency_us": 2235.8536017897086, 00:22:35.965 "min_latency_us": 2055.447272727273, 00:22:35.965 "max_latency_us": 10128.290909090909 00:22:35.965 } 00:22:35.965 ], 00:22:35.965 "core_count": 1 00:22:35.965 } 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:35.965 | select(.opcode=="crc32c") 00:22:35.965 | "\(.module_name) \(.executed)"' 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80555 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80555 ']' 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80555 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80555 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:22:35.965 killing process with pid 80555 00:22:35.965 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80555' 00:22:35.966 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80555 00:22:35.966 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.966 00:22:35.966 Latency(us) 00:22:35.966 [2024-11-20T05:32:50.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.966 [2024-11-20T05:32:50.479Z] =================================================================================================================== 00:22:35.966 [2024-11-20T05:32:50.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.966 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80555 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:36.225 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80614 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80614 /var/tmp/bperf.sock 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80614 ']' 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:36.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:36.226 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:36.226 [2024-11-20 05:32:50.608265] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:22:36.226 [2024-11-20 05:32:50.608360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80614 ] 00:22:36.484 [2024-11-20 05:32:50.752347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.484 [2024-11-20 05:32:50.787788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.484 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:36.484 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:36.484 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:36.484 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:36.484 05:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:36.743 [2024-11-20 05:32:51.208245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:36.743 05:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.743 05:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.352 nvme0n1 00:22:37.352 05:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:37.352 05:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.352 Running I/O for 2 seconds... 
00:22:39.658 15114.00 IOPS, 59.04 MiB/s [2024-11-20T05:32:54.172Z] 15050.00 IOPS, 58.79 MiB/s 00:22:39.659 Latency(us) 00:22:39.659 [2024-11-20T05:32:54.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.659 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:39.659 nvme0n1 : 2.01 15066.55 58.85 0.00 0.00 8487.79 2383.13 17396.83 00:22:39.659 [2024-11-20T05:32:54.172Z] =================================================================================================================== 00:22:39.659 [2024-11-20T05:32:54.172Z] Total : 15066.55 58.85 0.00 0.00 8487.79 2383.13 17396.83 00:22:39.659 { 00:22:39.659 "results": [ 00:22:39.659 { 00:22:39.659 "job": "nvme0n1", 00:22:39.659 "core_mask": "0x2", 00:22:39.659 "workload": "randwrite", 00:22:39.659 "status": "finished", 00:22:39.659 "queue_depth": 128, 00:22:39.659 "io_size": 4096, 00:22:39.659 "runtime": 2.006299, 00:22:39.659 "iops": 15066.547907365752, 00:22:39.659 "mibps": 58.85370276314747, 00:22:39.659 "io_failed": 0, 00:22:39.659 "io_timeout": 0, 00:22:39.659 "avg_latency_us": 8487.788141037208, 00:22:39.659 "min_latency_us": 2383.1272727272726, 00:22:39.659 "max_latency_us": 17396.82909090909 00:22:39.659 } 00:22:39.659 ], 00:22:39.659 "core_count": 1 00:22:39.659 } 00:22:39.659 05:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:39.659 05:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:39.659 05:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:39.659 05:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:39.659 05:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:39.659 | select(.opcode=="crc32c") 00:22:39.659 | "\(.module_name) \(.executed)"' 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80614 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80614 ']' 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80614 00:22:39.659 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80614 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
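As a quick sanity check on the table above, the MiB/s column is just IOPS multiplied by the I/O size: 15066.55 write IOPS at 4096 bytes per I/O works out to about 58.85 MiB/s, matching the reported value. A one-liner to reproduce the arithmetic (bc is only used here for illustration; it is not part of the test):

# IOPS * io_size(B) / 2^20 -> MiB/s
echo 'scale=2; 15066.55 * 4096 / 1048576' | bc    # prints 58.85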
00:22:39.917 killing process with pid 80614 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80614' 00:22:39.917 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.917 00:22:39.917 Latency(us) 00:22:39.917 [2024-11-20T05:32:54.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.917 [2024-11-20T05:32:54.430Z] =================================================================================================================== 00:22:39.917 [2024-11-20T05:32:54.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80614 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80614 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80668 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80668 /var/tmp/bperf.sock 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80668 ']' 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.917 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:39.917 [2024-11-20 05:32:54.394235] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:22:39.917 [2024-11-20 05:32:54.394345] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80668 ] 00:22:39.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:39.917 Zero copy mechanism will not be used. 00:22:40.175 [2024-11-20 05:32:54.537240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.175 [2024-11-20 05:32:54.569791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.175 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:40.175 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:40.175 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:40.175 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:40.175 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:40.743 [2024-11-20 05:32:54.950757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:40.743 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.743 05:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.001 nvme0n1 00:22:41.001 05:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:41.001 05:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.001 Zero copy mechanism will not be used. 00:22:41.001 Running I/O for 2 seconds... 
00:22:43.340 6323.00 IOPS, 790.38 MiB/s [2024-11-20T05:32:57.853Z] 6315.50 IOPS, 789.44 MiB/s 00:22:43.340 Latency(us) 00:22:43.340 [2024-11-20T05:32:57.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:43.340 nvme0n1 : 2.00 6313.72 789.22 0.00 0.00 2528.25 1504.35 4289.63 00:22:43.340 [2024-11-20T05:32:57.853Z] =================================================================================================================== 00:22:43.340 [2024-11-20T05:32:57.853Z] Total : 6313.72 789.22 0.00 0.00 2528.25 1504.35 4289.63 00:22:43.340 { 00:22:43.340 "results": [ 00:22:43.340 { 00:22:43.340 "job": "nvme0n1", 00:22:43.340 "core_mask": "0x2", 00:22:43.340 "workload": "randwrite", 00:22:43.340 "status": "finished", 00:22:43.340 "queue_depth": 16, 00:22:43.340 "io_size": 131072, 00:22:43.340 "runtime": 2.004364, 00:22:43.340 "iops": 6313.723455420273, 00:22:43.340 "mibps": 789.2154319275342, 00:22:43.340 "io_failed": 0, 00:22:43.340 "io_timeout": 0, 00:22:43.340 "avg_latency_us": 2528.2458118602062, 00:22:43.340 "min_latency_us": 1504.3490909090908, 00:22:43.340 "max_latency_us": 4289.629090909091 00:22:43.340 } 00:22:43.340 ], 00:22:43.340 "core_count": 1 00:22:43.340 } 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:43.340 | select(.opcode=="crc32c") 00:22:43.340 | "\(.module_name) \(.executed)"' 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80668 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80668 ']' 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80668 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80668 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:22:43.340 killing process with pid 80668 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80668' 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80668 00:22:43.340 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.340 00:22:43.340 Latency(us) 00:22:43.340 [2024-11-20T05:32:57.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.340 [2024-11-20T05:32:57.853Z] =================================================================================================================== 00:22:43.340 [2024-11-20T05:32:57.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.340 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80668 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80470 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80470 ']' 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80470 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80470 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:43.598 killing process with pid 80470 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80470' 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80470 00:22:43.598 05:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80470 00:22:43.598 00:22:43.598 real 0m16.954s 00:22:43.598 user 0m33.562s 00:22:43.598 sys 0m4.297s 00:22:43.598 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.598 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:43.598 ************************************ 00:22:43.598 END TEST nvmf_digest_clean 00:22:43.598 ************************************ 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:43.857 ************************************ 00:22:43.857 START TEST nvmf_digest_error 00:22:43.857 ************************************ 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:22:43.857 05:32:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80745 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80745 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80745 ']' 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.857 05:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:43.857 [2024-11-20 05:32:58.197395] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:43.857 [2024-11-20 05:32:58.197501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.857 [2024-11-20 05:32:58.350561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.116 [2024-11-20 05:32:58.396486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.116 [2024-11-20 05:32:58.396572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.116 [2024-11-20 05:32:58.396595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.116 [2024-11-20 05:32:58.396611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.116 [2024-11-20 05:32:58.396623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
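The nvmf_digest_error test starting here launches its own nvmf_tgt (pid 80745 in this run) with --wait-for-rpc so that crc32c handling can be rerouted to the error-injection accel module before the framework comes up; the records that follow show that assignment, a null0 bdev behind nqn.2016-06.io.spdk:cnode1, a TCP listener on 10.0.0.3:4420, and accel_error_inject_error switching corruption on, which is what produces the stream of data digest errors further down. A rough target-side sketch using standard SPDK RPCs (the null-bdev geometry and the transport/subsystem arguments are not expanded in this log, so those values are illustrative only):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

    # While the target is still waiting for RPCs, route every crc32c operation
    # to the "error" accel module, then let initialization finish.
    $rpc accel_assign_opc -o crc32c -m error
    $rpc framework_start_init

    # Expose a null bdev through an NVMe/TCP subsystem on 10.0.0.3:4420
    # (bdev size/block size and subsystem flags assumed, not shown in this log).
    $rpc bdev_null_create null0 100 4096
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Later (host/digest.sh@67 in this trace) corruption is injected into crc32c
    # results, so completions carry a bad digest and the --ddgst host side logs
    # the "data digest error" records seen below:
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256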
00:22:44.116 [2024-11-20 05:32:58.397119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.050 [2024-11-20 05:32:59.261787] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:45.050 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.051 [2024-11-20 05:32:59.298234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:45.051 null0 00:22:45.051 [2024-11-20 05:32:59.333943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.051 [2024-11-20 05:32:59.358067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80777 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80777 /var/tmp/bperf.sock 00:22:45.051 05:32:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80777 ']' 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.051 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.051 [2024-11-20 05:32:59.434544] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:45.051 [2024-11-20 05:32:59.434691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80777 ] 00:22:45.310 [2024-11-20 05:32:59.590576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.310 [2024-11-20 05:32:59.630441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.310 [2024-11-20 05:32:59.663377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:45.310 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.310 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:45.310 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.310 05:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.569 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.828 nvme0n1 00:22:46.087 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:46.087 05:33:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.087 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:46.087 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.087 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:46.087 05:33:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.087 Running I/O for 2 seconds... 00:22:46.087 [2024-11-20 05:33:00.540727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.087 [2024-11-20 05:33:00.541390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.087 [2024-11-20 05:33:00.541659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.087 [2024-11-20 05:33:00.560254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.087 [2024-11-20 05:33:00.560543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.087 [2024-11-20 05:33:00.560941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.087 [2024-11-20 05:33:00.578863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.087 [2024-11-20 05:33:00.579152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.087 [2024-11-20 05:33:00.579405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.087 [2024-11-20 05:33:00.597364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.087 [2024-11-20 05:33:00.597632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.087 [2024-11-20 05:33:00.597900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.615963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.616198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.616324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.634326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.634771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14597 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.634882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.652756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.653056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.653188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.671588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.672107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.672349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.690921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.691390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.691647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.710154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.710617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.710958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.729743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.729817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.729832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.748185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.748262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.748279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.766727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.766804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.766819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.785163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.785436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.785457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.803715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.803801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.803830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.822329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.822409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.822428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.347 [2024-11-20 05:33:00.840967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.347 [2024-11-20 05:33:00.841051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.347 [2024-11-20 05:33:00.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.859497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.859570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.859586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.877786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.877864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.895706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.895981] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.896000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.914057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.914149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.914171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.931872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.932087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.932107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.950025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.950078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.950094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.968072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.968121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.968137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:00.985995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:00.986049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:00.986066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:01.003715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:01.003897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:01.003930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:01.021665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:01.021716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.606 [2024-11-20 05:33:01.021733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.606 [2024-11-20 05:33:01.039628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.606 [2024-11-20 05:33:01.039684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.607 [2024-11-20 05:33:01.039700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.607 [2024-11-20 05:33:01.057531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.607 [2024-11-20 05:33:01.057581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.607 [2024-11-20 05:33:01.057597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.607 [2024-11-20 05:33:01.075582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.607 [2024-11-20 05:33:01.075660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.607 [2024-11-20 05:33:01.075677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.607 [2024-11-20 05:33:01.093580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.607 [2024-11-20 05:33:01.093644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.607 [2024-11-20 05:33:01.093660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.607 [2024-11-20 05:33:01.111414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.607 [2024-11-20 05:33:01.111485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.607 [2024-11-20 05:33:01.111502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.130030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.130268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.130288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.148670] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.148753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.148771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.167100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.167171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.167188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.186407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.186460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.186478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.204988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.205174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.205195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.223216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.223266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.223283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.241513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.241591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.241608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.259595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.259673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.259692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.277896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.277982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.277999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.296753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.297179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.866 [2024-11-20 05:33:01.297209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.866 [2024-11-20 05:33:01.317117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.866 [2024-11-20 05:33:01.317196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.867 [2024-11-20 05:33:01.317213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.867 [2024-11-20 05:33:01.335355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.867 [2024-11-20 05:33:01.335427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.867 [2024-11-20 05:33:01.335443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.867 [2024-11-20 05:33:01.354752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.867 [2024-11-20 05:33:01.354828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.867 [2024-11-20 05:33:01.354846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.867 [2024-11-20 05:33:01.373520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:46.867 [2024-11-20 05:33:01.373602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.867 [2024-11-20 05:33:01.373619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.391955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.392252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.392273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.411261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.411568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.411589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.429460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.429511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.429529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.447292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.447345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.447361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.465244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.465468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.465490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.483453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.483506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.483522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.501743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.501810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.501827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 13663.00 IOPS, 53.37 MiB/s [2024-11-20T05:33:01.639Z] [2024-11-20 05:33:01.519674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.519871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 
[2024-11-20 05:33:01.519891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.538431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.538659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.538685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.556483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.556536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.556553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.574494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.574546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.574563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.592262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.592442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.592464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.610212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.610259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.126 [2024-11-20 05:33:01.628548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.126 [2024-11-20 05:33:01.628598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.126 [2024-11-20 05:33:01.628615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.385 [2024-11-20 05:33:01.646625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.385 [2024-11-20 05:33:01.646682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:25109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.385 [2024-11-20 05:33:01.646700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.385 [2024-11-20 05:33:01.664501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.385 [2024-11-20 05:33:01.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.385 [2024-11-20 05:33:01.664563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.385 [2024-11-20 05:33:01.682218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.385 [2024-11-20 05:33:01.682388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.385 [2024-11-20 05:33:01.682407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.385 [2024-11-20 05:33:01.708079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.385 [2024-11-20 05:33:01.708142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.385 [2024-11-20 05:33:01.708157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.726628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.726695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.744495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.744546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.744563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.762879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.762959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.762977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.781149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.781205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.781222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.799205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.799277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.799295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.817122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.817196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.835159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.835227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.835244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.853194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.853264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.853281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.871390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.871465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.386 [2024-11-20 05:33:01.889319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.386 [2024-11-20 05:33:01.889387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.386 [2024-11-20 05:33:01.889403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.907355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.907424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.907441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.925268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.925335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.925351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.943244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.943338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.961233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.961302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.961318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.979272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.979344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.979361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:01.997468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:01.997547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:01.997565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.015492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.015567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.015584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.033996] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.034045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.034062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.052020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.052229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.071633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.071815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.071848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.089773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.089819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.089835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.107484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.107527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.107542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.125236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.125402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.125421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.645 [2024-11-20 05:33:02.143177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.645 [2024-11-20 05:33:02.143223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.645 [2024-11-20 05:33:02.143238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.161285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.161330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.161346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.179843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.179896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.179929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.198157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.198223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.198252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.217212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.217259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.217275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.235780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.235876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.235928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.254092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.254140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.254157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.272103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.272284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.272303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.290597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.290649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.290673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.308656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.308727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.328029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.328088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.328104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.348453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.348501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.348516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.366333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.366376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.366391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.384119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.384161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.384175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.905 [2024-11-20 05:33:02.401871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:47.905 [2024-11-20 05:33:02.402063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.905 [2024-11-20 05:33:02.402098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.419995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.420181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.420319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.438138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.438325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.438458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.456396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.456596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.456827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.474757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.474968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.475108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.492956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.493148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.493258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 [2024-11-20 05:33:02.511005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c88fc0) 00:22:48.165 [2024-11-20 05:33:02.511180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.165 [2024-11-20 05:33:02.511199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.165 13789.00 IOPS, 53.86 MiB/s 00:22:48.165 Latency(us) 00:22:48.165 [2024-11-20T05:33:02.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.165 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:48.165 nvme0n1 : 2.01 13809.63 53.94 0.00 0.00 9260.58 8340.95 
35508.60 00:22:48.165 [2024-11-20T05:33:02.678Z] =================================================================================================================== 00:22:48.165 [2024-11-20T05:33:02.678Z] Total : 13809.63 53.94 0.00 0.00 9260.58 8340.95 35508.60 00:22:48.165 { 00:22:48.165 "results": [ 00:22:48.165 { 00:22:48.165 "job": "nvme0n1", 00:22:48.165 "core_mask": "0x2", 00:22:48.165 "workload": "randread", 00:22:48.165 "status": "finished", 00:22:48.165 "queue_depth": 128, 00:22:48.165 "io_size": 4096, 00:22:48.165 "runtime": 2.006281, 00:22:48.165 "iops": 13809.630854302064, 00:22:48.165 "mibps": 53.94387052461744, 00:22:48.165 "io_failed": 0, 00:22:48.165 "io_timeout": 0, 00:22:48.165 "avg_latency_us": 9260.577461790359, 00:22:48.165 "min_latency_us": 8340.945454545454, 00:22:48.165 "max_latency_us": 35508.59636363637 00:22:48.165 } 00:22:48.165 ], 00:22:48.165 "core_count": 1 00:22:48.165 } 00:22:48.165 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:48.165 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:48.165 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:48.165 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:48.165 | .driver_specific 00:22:48.165 | .nvme_error 00:22:48.165 | .status_code 00:22:48.165 | .command_transient_transport_error' 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 108 > 0 )) 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80777 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80777 ']' 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80777 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80777 00:22:48.424 killing process with pid 80777 00:22:48.424 Received shutdown signal, test time was about 2.000000 seconds 00:22:48.424 00:22:48.424 Latency(us) 00:22:48.424 [2024-11-20T05:33:02.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.424 [2024-11-20T05:33:02.937Z] =================================================================================================================== 00:22:48.424 [2024-11-20T05:33:02.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80777' 00:22:48.424 05:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80777 00:22:48.424 05:33:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80777 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80824 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80824 /var/tmp/bperf.sock 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80824 ']' 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:48.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:48.683 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:48.683 [2024-11-20 05:33:03.139380] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:48.683 [2024-11-20 05:33:03.139747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80824 ] 00:22:48.683 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:48.683 Zero copy mechanism will not be used.
00:22:48.683 [2024-11-20 05:33:03.287086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.683 [2024-11-20 05:33:03.323898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.942 [2024-11-20 05:33:03.358705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:48.942 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:48.942 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:48.942 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:48.942 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:49.508 05:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:49.767 nvme0n1 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:49.767 05:33:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:50.027 Zero copy mechanism will not be used. 00:22:50.027 Running I/O for 2 seconds...
00:22:50.027 [2024-11-20 05:33:04.361079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.027 [2024-11-20 05:33:04.361304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.027 [2024-11-20 05:33:04.361447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.027 [2024-11-20 05:33:04.366180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.027 [2024-11-20 05:33:04.366389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.366592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.371197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.371380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.371560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.376150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.376326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.376514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.381087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.381131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.381146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.385559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.385602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.385618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.390065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.390107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.390121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.394534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.394576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.394590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.399026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.399067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.399081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.403672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.403717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.403732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.408236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.408279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.408293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.412816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.412860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.412874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.417282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.417326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.417341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.421834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.421889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.421923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.426502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.426548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.426563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.431192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.431232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.431279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.435933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.435975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.435989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.440475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.440518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.445065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.445111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.445125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.449544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.449586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.449601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.454146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.454331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.454520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.459048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.459225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.459374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.463874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.463930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.463946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.028 [2024-11-20 05:33:04.468379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.028 [2024-11-20 05:33:04.468446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.028 [2024-11-20 05:33:04.468463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.472988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.473040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.473056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.477460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.477511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.477527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.481982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.482023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.482037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.486484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.486534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:50.029 [2024-11-20 05:33:04.486549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.491063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.491118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.491132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.495607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.495649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.495663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.500179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.500232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.500248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.504673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.504726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.504741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.509186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.509467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.509486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.513974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.514031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.514048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.518536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.518600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.518616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.523107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.523169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.523183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.527647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.527689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.527703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.532148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.532194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.532209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.029 [2024-11-20 05:33:04.536673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.029 [2024-11-20 05:33:04.536727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.029 [2024-11-20 05:33:04.536742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.541246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.541312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.541327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.545830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.545872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.545887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.550428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.550472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.550486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.554955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.554995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.555008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.559496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.559537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.559567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.563996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.564039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.564053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.568487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.568528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.568542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.572985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.573025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.573039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.577450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.577497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.577513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.581962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 
00:22:50.290 [2024-11-20 05:33:04.582003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.582022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.586317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.586367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.586381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.590844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.590887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.590917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.595442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.595487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.595502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.599869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.599923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.599939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.604333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.604376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.604390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.608735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.608779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-11-20 05:33:04.608794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.290 [2024-11-20 05:33:04.613199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18bf400) 00:22:50.290 [2024-11-20 05:33:04.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.613387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.617889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.617946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.622418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.622458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.622489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.626875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.626931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.626947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.631461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.631502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.631516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.635992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.636033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.636047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.640577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.640624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.640639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.645077] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.645117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.645131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.649616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.649679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.649694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.654202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.654260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.654291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.658757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.658804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.658818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.663246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.663410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.663428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.667844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.667891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.667919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.672371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.672414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.672428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:22:50.291 [2024-11-20 05:33:04.676884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.676940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.676955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.681335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.681377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.681391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.685767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.685810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.685824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.690235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.690395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.690414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.694879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.694936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.694951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.699392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.699434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.699448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.703894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.703944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.703958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.708395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.708437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.708452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.712888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.712959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.717435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.717606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.717624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.722066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.722108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.722122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.726502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.726543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.726558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.730971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.291 [2024-11-20 05:33:04.731013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-11-20 05:33:04.731027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.291 [2024-11-20 05:33:04.735387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.735428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.735441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.739854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.739927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.744368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.744525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.744544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.748958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.748999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.749014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.753386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.753428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.753442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.757799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.757841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.757855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.762288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.762331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.762345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.766747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.766788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 
[2024-11-20 05:33:04.766802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.771210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.771251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.771265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.775662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.775703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.775718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.780162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.780204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.780218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.784526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.784567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.784581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.788991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.789034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.789049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.793469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.793510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.793524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.292 [2024-11-20 05:33:04.797966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.292 [2024-11-20 05:33:04.798006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.292 [2024-11-20 05:33:04.798020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.802443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.802485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.802499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.806813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.806854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.806869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.811259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.811300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.811314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.815755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.815797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.815811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.820238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.820279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.820293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.824658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.824700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.824714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.829099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.829140] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.829154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.833633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.833675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.833689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.838138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.838182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.838196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.842598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.842640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.842654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.847080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.847121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.847135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.851569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.851610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.851624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.856080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.856122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.856136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.860649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 
05:33:04.860691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.860706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.865158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.865200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.865215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.869674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.869712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.869725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.874158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.552 [2024-11-20 05:33:04.874207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.552 [2024-11-20 05:33:04.878602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.552 [2024-11-20 05:33:04.878639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.878651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.883169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.883205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.883218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.887611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.887647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.887660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.892002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.892049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.896480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.896516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.896529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.900948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.900995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.905383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.905419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.905432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.909876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.909933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.909947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.914373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.914419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.914432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.918870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.918917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.918940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.923396] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.923432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.923445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.927807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.927851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.927864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.932220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.932256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.932268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.936707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.936743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.936756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.941206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.941243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.941256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.945768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.945807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.945820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.950298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.950335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.950348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:22:50.553 [2024-11-20 05:33:04.954734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.954770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.954783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.959181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.959216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.959229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.963711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.963747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.963760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.968264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.968300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.968313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.972888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.972939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.972953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.977336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.977372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.977385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.981809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.981847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.981859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.986294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.986331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.986344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.990805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.990841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.990854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.995357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.553 [2024-11-20 05:33:04.995393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.553 [2024-11-20 05:33:04.995406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.553 [2024-11-20 05:33:04.999879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:04.999930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:04.999944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.004399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.004436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.008840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.008877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.008889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.013534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.013572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.013585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.018012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.018048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.018061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.022462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.022498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.022518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.026944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.026980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.026993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.031429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.031471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.031484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.035945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.035982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.035995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.040380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.040418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.040431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.044880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.044928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 
[2024-11-20 05:33:05.044942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.049392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.049431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.054151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.054190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.054204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.554 [2024-11-20 05:33:05.058733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.554 [2024-11-20 05:33:05.058774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.554 [2024-11-20 05:33:05.058787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.063530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.063582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.063597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.068668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.068708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.068722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.073480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.073520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.073534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.078646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.078684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.078697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.084139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.084178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.084191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.088639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.088676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.088688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.093829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.093869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.093888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.099267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.099305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.099319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.104563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.104602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.104615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.109087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.109124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.109137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.113532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.113568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-11-20 05:33:05.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.814 [2024-11-20 05:33:05.118000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.814 [2024-11-20 05:33:05.118035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.118048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.122646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.122684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.122698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.127131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.127167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.127180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.131573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.131608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.131621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.136047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.136093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.136114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.140594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.140631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.140644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.145086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.145122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.145134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.149534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.149570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.149582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.154086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.154124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.154137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.158492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.158529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.158541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.163036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.163072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.163086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.167495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.167532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.167545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.172043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.172080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.172093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.176528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.176565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.176578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.181074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.181111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.181124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.185569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.185605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.185618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.190025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.190060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.190072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.194483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.194518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.194532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.199024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.199059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.199072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.203391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.203426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.203438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.208060] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.208102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.208115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.815 [2024-11-20 05:33:05.212712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.815 [2024-11-20 05:33:05.212755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-20 05:33:05.212769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.217681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.217721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.217736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.222495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.222534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.222548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.227299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.227337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.227352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.231835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.231873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.231886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.236409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.236448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.236462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.240959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.240997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.241010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.245531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.245568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.245581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.250246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.250294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.250307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.254828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.254868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.254882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.259346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.259385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.259398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.263895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.263945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.263959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.268451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.268489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.268503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.272994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.273031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.273045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.277508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.277545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.277559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.281969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.282021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.282035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.286596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.286635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.286648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.291128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.291164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.291177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.295614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.295652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.295666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.300072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.300108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.300121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.304702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.304743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.304756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.309341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.816 [2024-11-20 05:33:05.309382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.816 [2024-11-20 05:33:05.309395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.816 [2024-11-20 05:33:05.313846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.817 [2024-11-20 05:33:05.313895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.817 [2024-11-20 05:33:05.313924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.817 [2024-11-20 05:33:05.318345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.817 [2024-11-20 05:33:05.318385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.817 [2024-11-20 05:33:05.318399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.817 [2024-11-20 05:33:05.322874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:50.817 [2024-11-20 05:33:05.322928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.817 [2024-11-20 05:33:05.322942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.327670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.327731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.327746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.332297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.332354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.077 [2024-11-20 05:33:05.332369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.336876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.336951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.336965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.341473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.341528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.341542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.346020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.346076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.346090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.350633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.350689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.350703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.077 6758.00 IOPS, 844.75 MiB/s [2024-11-20T05:33:05.590Z] [2024-11-20 05:33:05.356152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.356206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.356222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.360635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.360690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.360705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.365223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.365279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.365293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.369753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.369807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.369823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.374280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.374346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.378715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.378765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.378779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.383190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.383241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.383255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.387666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.387708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.387721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.392169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.077 [2024-11-20 05:33:05.392200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.392213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.396583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 
00:22:51.077 [2024-11-20 05:33:05.396620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-20 05:33:05.396633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.077 [2024-11-20 05:33:05.400990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.401025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.401038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.405433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.405469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.405481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.409889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.409950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.409965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.414410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.414463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.414477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.418923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.418963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.423838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.423884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.423898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.428339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.428388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.428403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.432891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.432951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.432964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.437434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.437487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.437501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.441995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.442045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.442058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.446630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.446683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.446697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.451228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.451268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.451281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.455741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.455791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.455806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.460297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.460347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.460361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.464846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.464894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.464922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.469416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.469453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.473897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.473945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.478411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.478449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.478462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.482918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.482956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.482969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.487533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.487573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.487587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:51.078 [2024-11-20 05:33:05.492067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.492106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.492119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.496516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.496553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.496566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.078 [2024-11-20 05:33:05.500961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.078 [2024-11-20 05:33:05.500996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-20 05:33:05.501009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.505403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.505439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.505452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.509886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.509932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.509945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.514319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.514354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.514368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.518794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.518830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.518843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.523224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.523261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.523274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.527694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.527731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.527744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.532189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.532225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.532237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.536653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.536690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.536704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.541092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.541127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.541140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.545615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.545652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.545666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.550069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.550104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.550117] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.554561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.554597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.554610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.559053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.559088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.559101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.563470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.563505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.563518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.567931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.567966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.567980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.572424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.572459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.572472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.576841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.576877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.576891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.581304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.581339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.581352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.079 [2024-11-20 05:33:05.585748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.079 [2024-11-20 05:33:05.585784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.079 [2024-11-20 05:33:05.585797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.590143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.590179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.590191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.594559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.594593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.594606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.599029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.599064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.599077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.603500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.603538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.603551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.608007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.608042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.608055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.612540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.612576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.612589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.617251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.617289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.617303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.621761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.621809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.621822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.626267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.626316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.630769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.630805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.630818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.635225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.635260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.635273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.639670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.639707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.639719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.644109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.644145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.644157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.648586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.648621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.648634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.653072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.653107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.653120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.657553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.657589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-20 05:33:05.657601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.340 [2024-11-20 05:33:05.662005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.340 [2024-11-20 05:33:05.662040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.662052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.666484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.666520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.666533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.670887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.670936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.670949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.675314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.675350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.675363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.679742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.679778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.679791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.684196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.684232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.684245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.688691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.688732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.688745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.693226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.693272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.693286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.697717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.697754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.697767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.702220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.702256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.702268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.706622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 
[2024-11-20 05:33:05.706658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.706671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.711119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.711155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.711168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.715578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.715614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.715627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.720121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.720158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.720171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.724600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.724647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.724662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.729063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.729111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.729125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.733599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.733647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.733660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.738103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.738148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.738161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.742662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.742711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.742725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.747147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.747189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.747203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.751676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.751718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.751732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.756224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.756270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-20 05:33:05.756284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.341 [2024-11-20 05:33:05.760810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.341 [2024-11-20 05:33:05.760858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.760872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.765317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.765357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.765371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.769830] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.769867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.774291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.774326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.774339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.778714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.778749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.778762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.783156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.783190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.783203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.787602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.787637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.787650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.792093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.792128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.792141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.796590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.796626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.796639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:22:51.342 [2024-11-20 05:33:05.801099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.801134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.801146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.805549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.805585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.805597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.810005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.810041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.814412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.814447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.814461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.818890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.818939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.818953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.823344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.823379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.823392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.827747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.827790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.827803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.832147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.832183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.832196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.836595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.836630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.836642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.841071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.841106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.841119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.342 [2024-11-20 05:33:05.845620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.342 [2024-11-20 05:33:05.845660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-20 05:33:05.845673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.850061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.850096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.850108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.854544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.854580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.854593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.859082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.859117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.859130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.863560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.863596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.863609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.868064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.868099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.868111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.872451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.872486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.872500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.876863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.876899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.876924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.881318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.881356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.881369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.885796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.885832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-20 05:33:05.885845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.602 [2024-11-20 05:33:05.890240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.602 [2024-11-20 05:33:05.890276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.602 [2024-11-20 05:33:05.890289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.894615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.894651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.894664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.899031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.899066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.899079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.903498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.903533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.903546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.907970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.908016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.908028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.912406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.912442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.912455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.916848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.916898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.921369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.921405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.921418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.925861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.925897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.925929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.930380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.930417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.930430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.934795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.934845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.939312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.939350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.939368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.944438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.944479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.944493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.949157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.949196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.949211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.953566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.953602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.953615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.957964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.958000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.958013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.963158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.963196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.963209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.968578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.968614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.968627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.974083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.974117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.974130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.979473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.979508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.979520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.984894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.984939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.984951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.990334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 
[2024-11-20 05:33:05.990367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.990380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:05.995779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:05.995814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:05.995835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:06.001289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:06.001323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:06.001335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:06.006695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:06.006729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:06.006741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:06.012210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:06.012244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:06.012257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:06.017618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:06.017652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-20 05:33:06.017664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.603 [2024-11-20 05:33:06.023109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.603 [2024-11-20 05:33:06.023145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.023158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.028483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.028516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.028529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.033825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.033859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.033872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.039298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.039332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.039343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.043726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.043763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.043776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.048201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.048237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.048249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.052730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.052767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.052780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.057275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.057311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.057325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.061754] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.061791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.061804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.066282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.066318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.066331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.070790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.070826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.070839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.075263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.075300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.075313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.079768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.079804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.079817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.084233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.084269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.084282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.088640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.088676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.088689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:51.604 [2024-11-20 05:33:06.093042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.093079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.093091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.097567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.097602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.097615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.102062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.102097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.102110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.106502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.106537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.106550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.604 [2024-11-20 05:33:06.111020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.604 [2024-11-20 05:33:06.111066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-20 05:33:06.111083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.115529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.115568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.115581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.120084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.120121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.120134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.124613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.124649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.124662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.129039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.129075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.133529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.133565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.133579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.138013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.138048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.138061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.142502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.142539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.142552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.147021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.147059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.147072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.151464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.151499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.151512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.155947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.155994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.156007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.160406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.160442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.160455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.164762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.164797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.164810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.169184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.169221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.169233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.173659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.864 [2024-11-20 05:33:06.173695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.864 [2024-11-20 05:33:06.173708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.864 [2024-11-20 05:33:06.178171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.178207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.178220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.182602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.182650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 
05:33:06.182667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.187216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.187258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.187272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.191711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.191750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.191763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.196217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.196252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.196264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.200724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.200759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.200772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.205237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.205273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.205287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.209667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.209703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.209716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.214145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.214187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.214201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.218618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.218654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.218667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.223177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.223221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.223235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.227712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.227765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.227778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.232406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.232451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.232465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.236933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.236983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.236998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.241449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.241491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.241505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.246117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.246166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.246180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.250730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.250775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.250789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.255302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.255346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.255360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.259817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.259878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.259891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.264396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.264436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.264449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.269008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.269062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.269076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.273578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.273630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.278129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.865 [2024-11-20 05:33:06.278184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.865 [2024-11-20 05:33:06.278198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.865 [2024-11-20 05:33:06.282703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.282746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.282760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.287334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.287388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.287402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.291886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.291935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.291948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.296429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.296481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.296495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.300967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.301007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.305535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.305585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.305599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.310080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 
00:22:51.866 [2024-11-20 05:33:06.310124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.310138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.314606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.314647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.314662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.319078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.319124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.319138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.323552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.323593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.323608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.328053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.328105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.328118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.332545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.332583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.332597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.337106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.337155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.337169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.341615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.341657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.341671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.346126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.346167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.346180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.866 [2024-11-20 05:33:06.350555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.350596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.350609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.866 6773.50 IOPS, 846.69 MiB/s [2024-11-20T05:33:06.379Z] [2024-11-20 05:33:06.356234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bf400) 00:22:51.866 [2024-11-20 05:33:06.356275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-20 05:33:06.356288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.866 00:22:51.866 Latency(us) 00:22:51.866 [2024-11-20T05:33:06.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:51.866 nvme0n1 : 2.00 6769.85 846.23 0.00 0.00 2359.50 2085.24 10962.39 00:22:51.866 [2024-11-20T05:33:06.379Z] =================================================================================================================== 00:22:51.866 [2024-11-20T05:33:06.379Z] Total : 6769.85 846.23 0.00 0.00 2359.50 2085.24 10962.39 00:22:51.866 { 00:22:51.866 "results": [ 00:22:51.866 { 00:22:51.866 "job": "nvme0n1", 00:22:51.866 "core_mask": "0x2", 00:22:51.866 "workload": "randread", 00:22:51.866 "status": "finished", 00:22:51.866 "queue_depth": 16, 00:22:51.866 "io_size": 131072, 00:22:51.866 "runtime": 2.003441, 00:22:51.866 "iops": 6769.852468827383, 00:22:51.866 "mibps": 846.2315586034229, 00:22:51.866 "io_failed": 0, 00:22:51.866 "io_timeout": 0, 00:22:51.866 "avg_latency_us": 2359.5027863237547, 00:22:51.866 "min_latency_us": 2085.2363636363634, 00:22:51.866 "max_latency_us": 10962.385454545454 00:22:51.866 } 00:22:51.866 ], 00:22:51.867 "core_count": 1 00:22:51.867 } 00:22:52.125 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:52.125 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:52.125 05:33:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:52.125 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:52.125 | .driver_specific 00:22:52.125 | .nvme_error 00:22:52.125 | .status_code 00:22:52.125 | .command_transient_transport_error' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 438 > 0 )) 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80824 ']' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:52.384 killing process with pid 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80824' 00:22:52.384 Received shutdown signal, test time was about 2.000000 seconds 00:22:52.384 00:22:52.384 Latency(us) 00:22:52.384 [2024-11-20T05:33:06.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.384 [2024-11-20T05:33:06.897Z] =================================================================================================================== 00:22:52.384 [2024-11-20T05:33:06.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80824 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80877 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80877 /var/tmp/bperf.sock 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # '[' -z 80877 ']' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:52.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:52.384 05:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:52.643 [2024-11-20 05:33:06.937750] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:52.643 [2024-11-20 05:33:06.937841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80877 ] 00:22:52.643 [2024-11-20 05:33:07.085945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.643 [2024-11-20 05:33:07.125128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.902 [2024-11-20 05:33:07.157835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.902 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.902 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:52.902 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:52.902 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.161 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.436 nvme0n1 00:22:53.436 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:53.436 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.436 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:53.436 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.436 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:53.437 05:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:53.703 Running I/O for 2 seconds... 00:22:53.703 [2024-11-20 05:33:08.071044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efb048 00:22:53.703 [2024-11-20 05:33:08.072586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.703 [2024-11-20 05:33:08.072628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:53.703 [2024-11-20 05:33:08.088069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efb8b8 00:22:53.703 [2024-11-20 05:33:08.089533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.703 [2024-11-20 05:33:08.089571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.703 [2024-11-20 05:33:08.105835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efc128 00:22:53.703 [2024-11-20 05:33:08.107353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.703 [2024-11-20 05:33:08.107408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:53.703 [2024-11-20 05:33:08.124384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efc998 00:22:53.703 [2024-11-20 05:33:08.125837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.703 [2024-11-20 05:33:08.125890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:53.703 [2024-11-20 05:33:08.142911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efd208 00:22:53.704 [2024-11-20 05:33:08.144348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.704 [2024-11-20 05:33:08.144405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:53.704 [2024-11-20 05:33:08.161706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efda78 00:22:53.704 [2024-11-20 05:33:08.163113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.704 [2024-11-20 
05:33:08.163161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:53.704 [2024-11-20 05:33:08.180447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efe2e8 00:22:53.704 [2024-11-20 05:33:08.181841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.704 [2024-11-20 05:33:08.181896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:53.704 [2024-11-20 05:33:08.199380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efeb58 00:22:53.704 [2024-11-20 05:33:08.200820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.704 [2024-11-20 05:33:08.200886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.228377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efef90 00:22:53.962 [2024-11-20 05:33:08.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.231042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.245141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efeb58 00:22:53.962 [2024-11-20 05:33:08.247730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.247766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.261869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efe2e8 00:22:53.962 [2024-11-20 05:33:08.264456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.264493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.278643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efda78 00:22:53.962 [2024-11-20 05:33:08.281222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.281258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.295423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efd208 00:22:53.962 [2024-11-20 05:33:08.298008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 
[2024-11-20 05:33:08.298044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.312446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efc998 00:22:53.962 [2024-11-20 05:33:08.315013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.315052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.329323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efc128 00:22:53.962 [2024-11-20 05:33:08.331806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.331851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.962 [2024-11-20 05:33:08.346222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efb8b8 00:22:53.962 [2024-11-20 05:33:08.348772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.962 [2024-11-20 05:33:08.348811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.363293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efb048 00:22:53.963 [2024-11-20 05:33:08.365791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.365830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.380354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016efa7d8 00:22:53.963 [2024-11-20 05:33:08.382848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.397358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef9f68 00:22:53.963 [2024-11-20 05:33:08.399774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.399811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.414041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef96f8 00:22:53.963 [2024-11-20 05:33:08.416441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:660 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:53.963 [2024-11-20 05:33:08.416477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.430709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef8e88 00:22:53.963 [2024-11-20 05:33:08.433089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.433124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.447615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef8618 00:22:53.963 [2024-11-20 05:33:08.449992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.450029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:53.963 [2024-11-20 05:33:08.464314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef7da8 00:22:53.963 [2024-11-20 05:33:08.466647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.963 [2024-11-20 05:33:08.466681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.481152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef7538 00:22:54.222 [2024-11-20 05:33:08.483472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.483506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.497834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef6cc8 00:22:54.222 [2024-11-20 05:33:08.500156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.500191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.514575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef6458 00:22:54.222 [2024-11-20 05:33:08.516897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.516943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.531275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef5be8 00:22:54.222 [2024-11-20 05:33:08.533531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17888 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.533563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.547989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef5378 00:22:54.222 [2024-11-20 05:33:08.550231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.550267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.564735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef4b08 00:22:54.222 [2024-11-20 05:33:08.566963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.566998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.581496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef4298 00:22:54.222 [2024-11-20 05:33:08.583708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.583743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.598380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef3a28 00:22:54.222 [2024-11-20 05:33:08.600605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.600648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.616144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef31b8 00:22:54.222 [2024-11-20 05:33:08.618357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.618397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.633310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef2948 00:22:54.222 [2024-11-20 05:33:08.635457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.635492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.650319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef20d8 00:22:54.222 [2024-11-20 05:33:08.652989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:17996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.653026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.668930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef1868 00:22:54.222 [2024-11-20 05:33:08.671043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.671079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.685628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef0ff8 00:22:54.222 [2024-11-20 05:33:08.687732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.687770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.702467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef0788 00:22:54.222 [2024-11-20 05:33:08.704534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.704567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 05:33:08.719184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeff18 00:22:54.222 [2024-11-20 05:33:08.721224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.222 [2024-11-20 05:33:08.721259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.736427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eef6a8 00:22:54.482 [2024-11-20 05:33:08.738438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.738470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.753264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeee38 00:22:54.482 [2024-11-20 05:33:08.755295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.755330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.770681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eee5c8 00:22:54.482 [2024-11-20 05:33:08.772682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.772719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.788375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eedd58 00:22:54.482 [2024-11-20 05:33:08.790587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.790622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.806417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eed4e8 00:22:54.482 [2024-11-20 05:33:08.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.808610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.824660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eecc78 00:22:54.482 [2024-11-20 05:33:08.826854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.826896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.843443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eec408 00:22:54.482 [2024-11-20 05:33:08.845423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.845462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.860744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eebb98 00:22:54.482 [2024-11-20 05:33:08.862673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.862713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.878096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeb328 00:22:54.482 [2024-11-20 05:33:08.880025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.880065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.895374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeaab8 00:22:54.482 [2024-11-20 05:33:08.897273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.897314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.912416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eea248 00:22:54.482 [2024-11-20 05:33:08.914240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.914287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.929507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee99d8 00:22:54.482 [2024-11-20 05:33:08.931333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.931368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.946786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee9168 00:22:54.482 [2024-11-20 05:33:08.948619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.948664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.964152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee88f8 00:22:54.482 [2024-11-20 05:33:08.965947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.965986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:54.482 [2024-11-20 05:33:08.981389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee8088 00:22:54.482 [2024-11-20 05:33:08.983150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.482 [2024-11-20 05:33:08.983191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:08.998651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee7818 00:22:54.741 [2024-11-20 05:33:09.000423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.000464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.015514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee6fa8 00:22:54.741 [2024-11-20 
05:33:09.017223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.017261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.032856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee6738 00:22:54.741 [2024-11-20 05:33:09.034561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.034605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.049580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee5ec8 00:22:54.741 [2024-11-20 05:33:09.051236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.051272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.741 14549.00 IOPS, 56.83 MiB/s [2024-11-20T05:33:09.254Z] [2024-11-20 05:33:09.066235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee5658 00:22:54.741 [2024-11-20 05:33:09.067857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.067894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.083464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee4de8 00:22:54.741 [2024-11-20 05:33:09.085109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.085153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.100454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee4578 00:22:54.741 [2024-11-20 05:33:09.102081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.102121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.117714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee3d08 00:22:54.741 [2024-11-20 05:33:09.119346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.119387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.135411] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee3498 00:22:54.741 [2024-11-20 05:33:09.137025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.137070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.152561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee2c28 00:22:54.741 [2024-11-20 05:33:09.154107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.154147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.169418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee23b8 00:22:54.741 [2024-11-20 05:33:09.170937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.170976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.186187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee1b48 00:22:54.741 [2024-11-20 05:33:09.187672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.741 [2024-11-20 05:33:09.187709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.741 [2024-11-20 05:33:09.203093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee12d8 00:22:54.742 [2024-11-20 05:33:09.204607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.742 [2024-11-20 05:33:09.204659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.742 [2024-11-20 05:33:09.219965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee0a68 00:22:54.742 [2024-11-20 05:33:09.221423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.742 [2024-11-20 05:33:09.221459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:54.742 [2024-11-20 05:33:09.236744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee01f8 00:22:54.742 [2024-11-20 05:33:09.238188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.742 [2024-11-20 05:33:09.238223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.253562] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016edf988 00:22:55.001 [2024-11-20 05:33:09.254977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.255016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.270454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016edf118 00:22:55.001 [2024-11-20 05:33:09.271878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.271930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.287317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ede8a8 00:22:55.001 [2024-11-20 05:33:09.288820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.288874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.304687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ede038 00:22:55.001 [2024-11-20 05:33:09.306072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.306121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.328564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ede038 00:22:55.001 [2024-11-20 05:33:09.331613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.331656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.346766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ede8a8 00:22:55.001 [2024-11-20 05:33:09.349419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.349461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.363606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016edf118 00:22:55.001 [2024-11-20 05:33:09.366235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.366275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 
05:33:09.381314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016edf988 00:22:55.001 [2024-11-20 05:33:09.383939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.383979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.398744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee01f8 00:22:55.001 [2024-11-20 05:33:09.401361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.401404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.416237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee0a68 00:22:55.001 [2024-11-20 05:33:09.418812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.418851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.433408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee12d8 00:22:55.001 [2024-11-20 05:33:09.435947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.435986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.450359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee1b48 00:22:55.001 [2024-11-20 05:33:09.452874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.452925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.467346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee23b8 00:22:55.001 [2024-11-20 05:33:09.470188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.470229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:55.001 [2024-11-20 05:33:09.485456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee2c28 00:22:55.001 [2024-11-20 05:33:09.488137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.488178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:22:55.001 [2024-11-20 05:33:09.502882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee3498 00:22:55.001 [2024-11-20 05:33:09.505401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.001 [2024-11-20 05:33:09.505442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.522194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee3d08 00:22:55.260 [2024-11-20 05:33:09.524651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.524693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.539100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee4578 00:22:55.260 [2024-11-20 05:33:09.541655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.541696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.556372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee4de8 00:22:55.260 [2024-11-20 05:33:09.558723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.558762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.573197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee5658 00:22:55.260 [2024-11-20 05:33:09.575815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.575877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.590335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee5ec8 00:22:55.260 [2024-11-20 05:33:09.592667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.592722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.607426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee6738 00:22:55.260 [2024-11-20 05:33:09.609724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.609762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 
cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.626837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee6fa8 00:22:55.260 [2024-11-20 05:33:09.629176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.629220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.645251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee7818 00:22:55.260 [2024-11-20 05:33:09.648248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.648297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.663327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee8088 00:22:55.260 [2024-11-20 05:33:09.665647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.665710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.680235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee88f8 00:22:55.260 [2024-11-20 05:33:09.682455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.682495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.697125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee9168 00:22:55.260 [2024-11-20 05:33:09.699384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.260 [2024-11-20 05:33:09.699424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:55.260 [2024-11-20 05:33:09.714213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ee99d8 00:22:55.261 [2024-11-20 05:33:09.716569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.261 [2024-11-20 05:33:09.716614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:55.261 [2024-11-20 05:33:09.731521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eea248 00:22:55.261 [2024-11-20 05:33:09.733736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.261 [2024-11-20 05:33:09.733780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.261 [2024-11-20 05:33:09.748554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeaab8 00:22:55.261 [2024-11-20 05:33:09.750738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.261 [2024-11-20 05:33:09.750777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:55.261 [2024-11-20 05:33:09.766327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeb328 00:22:55.261 [2024-11-20 05:33:09.768534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.261 [2024-11-20 05:33:09.768598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.784250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eebb98 00:22:55.519 [2024-11-20 05:33:09.786372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.786413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.801222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eec408 00:22:55.519 [2024-11-20 05:33:09.803314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.803352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.818065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eecc78 00:22:55.519 [2024-11-20 05:33:09.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.820188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.834867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eed4e8 00:22:55.519 [2024-11-20 05:33:09.836928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.836967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.851799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eedd58 00:22:55.519 [2024-11-20 05:33:09.853852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.853895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.868677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eee5c8 00:22:55.519 [2024-11-20 05:33:09.870690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.870725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.885917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeee38 00:22:55.519 [2024-11-20 05:33:09.887952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.887990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.902873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eef6a8 00:22:55.519 [2024-11-20 05:33:09.904869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.904921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.919949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016eeff18 00:22:55.519 [2024-11-20 05:33:09.921894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.921943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.937142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef0788 00:22:55.519 [2024-11-20 05:33:09.939360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.939399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.954464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef0ff8 00:22:55.519 [2024-11-20 05:33:09.956395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.956433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.971325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef1868 00:22:55.519 [2024-11-20 05:33:09.973245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.973281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:09.988116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef20d8 00:22:55.519 [2024-11-20 05:33:09.989946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:09.989981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:10.005581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef2948 00:22:55.519 [2024-11-20 05:33:10.007530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.519 [2024-11-20 05:33:10.007572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.519 [2024-11-20 05:33:10.023587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef31b8 00:22:55.519 [2024-11-20 05:33:10.025453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.520 [2024-11-20 05:33:10.025496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:55.778 [2024-11-20 05:33:10.041462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef3a28 00:22:55.778 [2024-11-20 05:33:10.043302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.778 [2024-11-20 05:33:10.043342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:55.778 14612.00 IOPS, 57.08 MiB/s [2024-11-20T05:33:10.291Z] [2024-11-20 05:33:10.059214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f965b0) with pdu=0x200016ef4298 00:22:55.778 [2024-11-20 05:33:10.061013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.778 [2024-11-20 05:33:10.061050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:55.778 00:22:55.778 Latency(us) 00:22:55.778 [2024-11-20T05:33:10.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.778 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:55.778 nvme0n1 : 2.01 14629.51 57.15 0.00 0.00 8741.67 2368.23 34317.03 00:22:55.778 [2024-11-20T05:33:10.291Z] =================================================================================================================== 00:22:55.778 [2024-11-20T05:33:10.291Z] Total : 14629.51 57.15 0.00 0.00 8741.67 2368.23 34317.03 00:22:55.778 { 00:22:55.778 "results": [ 00:22:55.778 { 00:22:55.778 "job": "nvme0n1", 00:22:55.778 "core_mask": "0x2", 00:22:55.778 "workload": "randwrite", 00:22:55.778 "status": "finished", 00:22:55.778 "queue_depth": 128, 00:22:55.778 "io_size": 4096, 00:22:55.778 "runtime": 2.006356, 
00:22:55.778 "iops": 14629.507425402073, 00:22:55.778 "mibps": 57.14651338047685, 00:22:55.778 "io_failed": 0, 00:22:55.778 "io_timeout": 0, 00:22:55.778 "avg_latency_us": 8741.672005748409, 00:22:55.778 "min_latency_us": 2368.232727272727, 00:22:55.778 "max_latency_us": 34317.03272727273 00:22:55.778 } 00:22:55.778 ], 00:22:55.778 "core_count": 1 00:22:55.778 } 00:22:55.778 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:55.778 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:55.778 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:55.778 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:55.778 | .driver_specific 00:22:55.778 | .nvme_error 00:22:55.778 | .status_code 00:22:55.778 | .command_transient_transport_error' 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80877 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80877 ']' 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80877 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80877 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:56.037 killing process with pid 80877 00:22:56.037 Received shutdown signal, test time was about 2.000000 seconds 00:22:56.037 00:22:56.037 Latency(us) 00:22:56.037 [2024-11-20T05:33:10.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.037 [2024-11-20T05:33:10.550Z] =================================================================================================================== 00:22:56.037 [2024-11-20T05:33:10.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80877' 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80877 00:22:56.037 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80877 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80930 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80930 /var/tmp/bperf.sock 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80930 ']' 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:56.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:56.296 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:56.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:56.296 Zero copy mechanism will not be used. 00:22:56.296 [2024-11-20 05:33:10.660262] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:22:56.296 [2024-11-20 05:33:10.660352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80930 ] 00:22:56.296 [2024-11-20 05:33:10.806848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.554 [2024-11-20 05:33:10.846606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.554 [2024-11-20 05:33:10.879846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.554 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.554 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:56.554 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.554 05:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.812 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.379 nvme0n1 00:22:57.379 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:57.379 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.379 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:57.379 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.379 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:57.380 05:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:57.380 Zero copy mechanism will not be used. 00:22:57.380 Running I/O for 2 seconds... 00:22:57.380 [2024-11-20 05:33:11.815662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.815784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.815816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.820935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.821026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.821052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.826001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.826128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.826160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.831072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.831227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.831260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.836291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.836441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.836467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.841303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.841386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.841413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.846302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.846411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.851357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.851441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.851466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.856520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.856614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.856639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.861631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.861726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.861758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.866779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.866927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.866957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.871922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.872015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.872041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.877026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.877126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.877150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.882109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.882196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.882221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.380 [2024-11-20 05:33:11.887154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.380 [2024-11-20 05:33:11.887249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.380 [2024-11-20 05:33:11.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.892251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.892343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.892367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.897309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.897405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.897430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.902371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.902474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.902498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.907522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.907610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.907653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.911815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.911948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.911983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.916771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.916840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.916865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.921832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.921927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.921952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.926922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.926990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.927014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.931950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.932016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.937014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.937083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.937107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.942069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.942136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.942161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.947132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.947200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.947223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.952200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.952275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.952301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.957335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.957416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.957441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.962406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.962475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.962501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.641 [2024-11-20 05:33:11.969187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.641 [2024-11-20 05:33:11.969280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.641 [2024-11-20 05:33:11.969304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:11.975092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:11.975176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:11.975201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:11.980220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:11.980302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:11.980327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:11.985403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:11.985476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:11.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:11.990549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:11.990629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:11.990654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:11.995575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:11.995651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:11.995676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.000677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.000760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.000784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.005788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.005856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.005881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.010849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.010940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 
05:33:12.010965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.015981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.016063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.016095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.021092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.021162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.021188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.026198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.026270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.026294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.031274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.031358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.031389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.036434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.036513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.036538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.041560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.041640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.041665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.046755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.046833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:57.642 [2024-11-20 05:33:12.046859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.051764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.051853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.051892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.056927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.057022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.057054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.062062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.062139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.062165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.067176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.067286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.067310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.072224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.072295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.072320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.077307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.077399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.077431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.082387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.082462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.082487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.087513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.087582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.087606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.092567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.092644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.092669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.097721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.097805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.097836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.102796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.102888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.102931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.642 [2024-11-20 05:33:12.107835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.642 [2024-11-20 05:33:12.107951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.642 [2024-11-20 05:33:12.107976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.112891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.112976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.113001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.117952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.118034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.118058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.123018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.123086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.123111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.128155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.128225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.128256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.133220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.133296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.133320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.138388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.138457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.138481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.143427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.143494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.143519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.643 [2024-11-20 05:33:12.148590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.643 [2024-11-20 05:33:12.148658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-11-20 05:33:12.148683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.153681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.901 [2024-11-20 05:33:12.153750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.901 [2024-11-20 05:33:12.153775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.158734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.901 [2024-11-20 05:33:12.158807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.901 [2024-11-20 05:33:12.158831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.163867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.901 [2024-11-20 05:33:12.163949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.901 [2024-11-20 05:33:12.163975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.169013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.901 [2024-11-20 05:33:12.169078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.901 [2024-11-20 05:33:12.169102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.174375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.901 [2024-11-20 05:33:12.174469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.901 [2024-11-20 05:33:12.174500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.901 [2024-11-20 05:33:12.179763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.179850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.179883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.185009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.185110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.185134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.190215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.190285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.190310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.195424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.195513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.195537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.200629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.200726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.200757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.205804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.205877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.205915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.210971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.211056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.211086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.216129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.216198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.216229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.221253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.221330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.221355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.226343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 
05:33:12.226412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.226437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.231402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.231492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.231516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.236484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.236577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.236609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.241594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.241670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.241694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.246627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.246700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.246725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.251728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.251797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.251832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.256843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.256931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.256956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.261970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with 
pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.262047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.262072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.267027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.267096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.267120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.272147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.272247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.272283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.277241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.277310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.277335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.282344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.282423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.282447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.287414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.287481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.287506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.292545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.292622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.292646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.297663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.297737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.297761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.302736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.302827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.302851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.307851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.307984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.312923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.312991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.313016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.317987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.318055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.318079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.323163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.323241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.323265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.328249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.328341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.328372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.333538] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.333614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.333640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.339291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.339371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.339395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.344508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.344601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.344632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.349666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.349750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.349777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.356046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.356122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.356152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.361747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.361823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.361852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.367073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.367172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.367199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.372328] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.372399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.372424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.377585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.377652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.377678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.382796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.382879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.382920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.388008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.388108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.393094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.393161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.393187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.398071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.398342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.398385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.902 [2024-11-20 05:33:12.403249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.403352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.403387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.902 
[2024-11-20 05:33:12.408175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:57.902 [2024-11-20 05:33:12.408268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.902 [2024-11-20 05:33:12.408300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.162 [2024-11-20 05:33:12.413272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.162 [2024-11-20 05:33:12.413379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.162 [2024-11-20 05:33:12.413404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.162 [2024-11-20 05:33:12.418388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.162 [2024-11-20 05:33:12.418457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.162 [2024-11-20 05:33:12.418481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.162 [2024-11-20 05:33:12.423433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.162 [2024-11-20 05:33:12.423502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.162 [2024-11-20 05:33:12.423526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.162 [2024-11-20 05:33:12.428522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.428591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.428615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.433615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.433683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.433707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.438643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.438711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.438735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.443734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.443803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.443841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.448802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.448916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.448941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.453957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.454068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.454091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.459077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.459144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.459168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.464194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.464289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.464312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.469357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.469436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.469469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.474477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.474569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.474593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.479541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.479634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.479658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.484630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.484699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.484722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.489733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.489847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.494817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.494886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.494924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.499893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.499970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.499994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.505012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.505079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.505103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.510058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.510126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.510149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.515223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.515299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.515323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.520310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.520377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.520401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.525377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.525446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.525470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.530443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.530517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.530541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.535583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.535672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.535696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.540688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.540757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.540781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.545765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.545833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.545857] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.550816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.550885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.550924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.555950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.556020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.556044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.163 [2024-11-20 05:33:12.561026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.163 [2024-11-20 05:33:12.561095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.163 [2024-11-20 05:33:12.561119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.566185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.566254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.566278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.571283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.571365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.571388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.576379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.576447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.576472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.581429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.581497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.581522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.586506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.586575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.586599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.591570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.591638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.591662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.596610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.596702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.596726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.601700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.601766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.601789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.606771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.606864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.611860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.611960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.611985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.616930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 
05:33:12.617025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.622020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.622101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.622125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.627207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.627283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.627307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.632299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.632369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.632393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.637369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.637438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.637462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.642418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.642483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.642506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.647475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.647544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.647568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.652622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.652714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.164 [2024-11-20 05:33:12.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.657693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.657762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.657786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.662769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.662837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.662861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.164 [2024-11-20 05:33:12.667894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.164 [2024-11-20 05:33:12.668021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.164 [2024-11-20 05:33:12.668045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.528 [2024-11-20 05:33:12.673017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.528 [2024-11-20 05:33:12.673120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.528 [2024-11-20 05:33:12.673145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.528 [2024-11-20 05:33:12.678111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.528 [2024-11-20 05:33:12.678183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.528 [2024-11-20 05:33:12.678207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.528 [2024-11-20 05:33:12.683159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.528 [2024-11-20 05:33:12.683228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.528 [2024-11-20 05:33:12.683251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.528 [2024-11-20 05:33:12.688298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.528 [2024-11-20 05:33:12.688363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:58.528 [2024-11-20 05:33:12.688387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.528 [2024-11-20 05:33:12.693390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.528 [2024-11-20 05:33:12.693460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.528 [2024-11-20 05:33:12.693484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.698472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.698541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.698564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.704464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.704567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.711761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.711883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.711922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.719157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.719282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.726505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.726603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.726627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.733898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.734048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.734072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.741436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.741547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.741571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.748747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.748862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.748886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.755900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.756014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.763035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.763134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.763158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.770269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.770371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.770395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.777470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.777594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.777618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.784653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.784772] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.784797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.791777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.791892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.791938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.798830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.798952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.798977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.805846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.805990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.806015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.812889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.813010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 5832.00 IOPS, 729.00 MiB/s [2024-11-20T05:33:13.042Z] [2024-11-20 05:33:12.820765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.820863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.820888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.827740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.827887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.827927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.835206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 
00:22:58.529 [2024-11-20 05:33:12.835309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.835333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.842590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.842690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.849365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.849460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.849484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.854548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.854647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.854671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.859651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.859742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.859766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.864781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.864873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.864898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.869830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.869898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.529 [2024-11-20 05:33:12.869939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.529 [2024-11-20 05:33:12.874870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.529 [2024-11-20 05:33:12.874951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.874976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.879944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.880012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.880036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.885064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.885134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.885165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.890072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.890143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.890169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.895145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.895209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.895233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.900363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.900436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.900460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.905702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.905774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.905798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.911237] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.911312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.911337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.916324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.916398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.916422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.921498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.921589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.921613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.926571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.926640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.926663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.931676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.931751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.931775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.936945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.937012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.937035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.942198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.942295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.942319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.947303] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.947387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.947412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.952387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.952479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.952503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.958232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.958301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.958325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.963639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.963718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.963741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.968996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.969087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.974178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.979348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.979438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 
[2024-11-20 05:33:12.984438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.984506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.984530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.989577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.989644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.989668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.994642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.994709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.994732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:12.999779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:12.999885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:12.999924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:13.004854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:13.004962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:13.004986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:13.009921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:13.009989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.530 [2024-11-20 05:33:13.010013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.530 [2024-11-20 05:33:13.015066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.530 [2024-11-20 05:33:13.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.531 [2024-11-20 05:33:13.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:22:58.531 [2024-11-20 05:33:13.020194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.531 [2024-11-20 05:33:13.020274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.531 [2024-11-20 05:33:13.020298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.531 [2024-11-20 05:33:13.025263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.531 [2024-11-20 05:33:13.025350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.531 [2024-11-20 05:33:13.025374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.531 [2024-11-20 05:33:13.030367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.531 [2024-11-20 05:33:13.030456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.531 [2024-11-20 05:33:13.030480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.531 [2024-11-20 05:33:13.035455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.531 [2024-11-20 05:33:13.035544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.531 [2024-11-20 05:33:13.035568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.040588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.040676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.040700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.045591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.045660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.045683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.050737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.050806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.050830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.055775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.055853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.055884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.060928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.061008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.061031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.065958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.066025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.066049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.071048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.071114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.071138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.076602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.076670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.076694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.081831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.081921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.081960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.087057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.087130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.087158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.092166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.092239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.092263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.097777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.097846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.097869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.103724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.103798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.103834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.109101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.109189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.109212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.114401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.114473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.114497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.119599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.119669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.119692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.124843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.124956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.124980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.130891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.130997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.131031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.791 [2024-11-20 05:33:13.136061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.791 [2024-11-20 05:33:13.136134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.791 [2024-11-20 05:33:13.136158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.141334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.141409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.141432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.147260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.147358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.147389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.152704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.152821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.152851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.158003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.158111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.158141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.163300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.163418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 
05:33:13.163449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.168987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.169084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.169113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.174277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.174373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.174403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.179921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.180025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.180055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.185258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.185357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.185389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.191148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.191252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.191283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.196784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.196932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.196964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.202161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.202265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
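Each write that fails the digest check is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which spdk_nvme_print_completion prints as (SCT/SC): status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with dnr:0 meaning the host is still allowed to retry the command. A small sketch of how those fields unpack from the 16-bit status in completion dword 3, assuming the standard NVMe CQE layout (print_status() is a hypothetical helper, not an SPDK function):

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the NVMe completion status word (CQE DW3[31:16]).
 * Field layout per the NVMe base spec: P, SC, SCT, CRD, M, DNR. */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1u;           /* phase tag           */
    unsigned sc  = (status >> 1) & 0xFFu;   /* status code         */
    unsigned sct = (status >> 9) & 0x7u;    /* status code type    */
    unsigned crd = (status >> 12) & 0x3u;   /* command retry delay */
    unsigned m   = (status >> 14) & 0x1u;   /* more                */
    unsigned dnr = (status >> 15) & 0x1u;   /* do not retry        */

    printf("(%02x/%02x) p:%u m:%u dnr:%u crd:%u\n", sct, sc, p, m, dnr, crd);
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 = Transient Transport Error, as in the log above. */
    uint16_t transient_transport_error = (uint16_t)(0x22u << 1);

    print_status(transient_transport_error);
    return 0;
}

Running this with SCT 0x0 and SC 0x22 prints "(00/22) p:0 m:0 dnr:0 crd:0", matching the completion lines in this log.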
00:22:58.792 [2024-11-20 05:33:13.202296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.207476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.207572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.207602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.212819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.212934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.212966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.218516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.218611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.218640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.223835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.223947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.223978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.229110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.229200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.229226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.234267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.234345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.234371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.239526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.239598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.239622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.244657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.244739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.244763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.249744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.249856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.249885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.254851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.254960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.254991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.260033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.260121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.260151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.265204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.792 [2024-11-20 05:33:13.265314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.792 [2024-11-20 05:33:13.265343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.792 [2024-11-20 05:33:13.270336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.270438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.270469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.793 [2024-11-20 05:33:13.275502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.275587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:58.793 [2024-11-20 05:33:13.280709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.280816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.280847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:58.793 [2024-11-20 05:33:13.285925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.286017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.286048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:58.793 [2024-11-20 05:33:13.291105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.291209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.291240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:58.793 [2024-11-20 05:33:13.296257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:58.793 [2024-11-20 05:33:13.296358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.793 [2024-11-20 05:33:13.296387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.053 [2024-11-20 05:33:13.301400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.053 [2024-11-20 05:33:13.301487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.053 [2024-11-20 05:33:13.301517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.053 [2024-11-20 05:33:13.306528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.053 [2024-11-20 05:33:13.306613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.053 [2024-11-20 05:33:13.306643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.053 [2024-11-20 05:33:13.311696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.053 [2024-11-20 05:33:13.311766] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.053 [2024-11-20 05:33:13.311791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.053 [2024-11-20 05:33:13.316856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.053 [2024-11-20 05:33:13.316968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.053 [2024-11-20 05:33:13.316999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.322029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.322101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.322126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.327127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.327193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.327217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.332270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.332339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.332364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.337364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.337430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.337454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.342470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.342561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.342585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.347579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.347675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.347705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.352858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.352973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.353003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.358002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.358078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.358104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.363428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.363528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.363553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.369372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.369476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.369502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.376503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.376583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.376610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.383088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.383255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.390305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 
05:33:13.390399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.390434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.397202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.397309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.397340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.404129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.404222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.404253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.411260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.411365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.411408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.418049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.418168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.418198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.424861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.424989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.425018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.431715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.431862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.431899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.438211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with 
pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.438340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.438373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.443519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.443616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.443651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.448739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.448830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.448862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.454094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.454189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.454222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.459312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.459411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.459453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.464557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.464647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.464679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.469866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.469979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.054 [2024-11-20 05:33:13.470012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.054 [2024-11-20 05:33:13.475143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.054 [2024-11-20 05:33:13.475289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.475322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.480423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.480544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.480577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.485723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.485840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.485871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.490980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.491088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.491129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.496309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.496408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.496445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.501633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.501747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.501781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.506914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.507019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.507050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.512191] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.512272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.512302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.517484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.517566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.517597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.522596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.522692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.522721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.527771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.527877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.527939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.533182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.533279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.533313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.538422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.538521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.538554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.543675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.543780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.543814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.549053] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.549155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.549188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.554329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.554428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.554461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.055 [2024-11-20 05:33:13.559545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.055 [2024-11-20 05:33:13.559657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.055 [2024-11-20 05:33:13.559690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.315 [2024-11-20 05:33:13.564935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.315 [2024-11-20 05:33:13.565047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.315 [2024-11-20 05:33:13.565080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.315 [2024-11-20 05:33:13.570318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.315 [2024-11-20 05:33:13.570432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.315 [2024-11-20 05:33:13.570469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.315 [2024-11-20 05:33:13.575679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.315 [2024-11-20 05:33:13.575793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.575846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.581056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.581178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.581212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 
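Each of the entries above records the same event: the TCP transport recomputed the CRC32C data digest for a PDU on tqpair 0x1f96750, found a mismatch, and the WRITE was completed with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22); only the timestamp, cid, and LBA differ between entries. A quick way to tally these occurrences from a saved copy of this output (a sketch only; the log file name is an assumption, not something the test produces):
# count the injected data-digest failures in a captured console log (hypothetical file name)
grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' nvmf_digest_error.log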
[2024-11-20 05:33:13.586349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.586447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.586479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.591541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.591627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.591656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.596970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.597067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.597104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.602340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.602477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.602517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.607566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.607636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.607666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.612798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.612886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.612931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.618007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.618078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.618109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.623218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.623287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.623316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.628488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.628572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.628602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.633692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.633787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.633821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.638943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.639020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.639051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.644148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.644231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.644260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.649298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.649367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.649395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.654533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.654603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.654631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.659691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.659791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.664881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.664994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.665022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.670092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.670165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.670195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.675354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.675424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.675451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.680537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.680651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.680679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.685728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.685802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.685828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.690971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.691049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.691077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.696261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.696349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.696376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.701460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.701577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.701607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.706791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.706925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.316 [2024-11-20 05:33:13.706955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.316 [2024-11-20 05:33:13.712017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.316 [2024-11-20 05:33:13.712142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.712171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.717274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.717479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.717508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.722051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.722292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.722321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.727297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.727655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.727683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.732589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.732969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.733004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.737855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.738236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.743158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.743529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.743564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.748459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.748814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.748848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.753946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.754285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.754322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.758847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.758930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.758955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.763961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.764028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 
05:33:13.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.769091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.769177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.769203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.774114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.774180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.774205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.779153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.779228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.779252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.784293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.784368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.784392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.789353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.789417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.789440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.794441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.794547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.799493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.799570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.317 [2024-11-20 05:33:13.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.804686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.804762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.804785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.317 [2024-11-20 05:33:13.809788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.809867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.809892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.317 5792.00 IOPS, 724.00 MiB/s [2024-11-20T05:33:13.830Z] [2024-11-20 05:33:13.815994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f96750) with pdu=0x200016eff3c8 00:22:59.317 [2024-11-20 05:33:13.816058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.317 [2024-11-20 05:33:13.816082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.317 00:22:59.317 Latency(us) 00:22:59.317 [2024-11-20T05:33:13.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.317 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:59.317 nvme0n1 : 2.00 5791.08 723.88 0.00 0.00 2756.54 1697.98 7864.32 00:22:59.317 [2024-11-20T05:33:13.830Z] =================================================================================================================== 00:22:59.317 [2024-11-20T05:33:13.830Z] Total : 5791.08 723.88 0.00 0.00 2756.54 1697.98 7864.32 00:22:59.317 { 00:22:59.317 "results": [ 00:22:59.317 { 00:22:59.317 "job": "nvme0n1", 00:22:59.317 "core_mask": "0x2", 00:22:59.317 "workload": "randwrite", 00:22:59.317 "status": "finished", 00:22:59.317 "queue_depth": 16, 00:22:59.317 "io_size": 131072, 00:22:59.317 "runtime": 2.004463, 00:22:59.317 "iops": 5791.0772112032, 00:22:59.317 "mibps": 723.8846514004, 00:22:59.317 "io_failed": 0, 00:22:59.317 "io_timeout": 0, 00:22:59.317 "avg_latency_us": 2756.5382620136584, 00:22:59.317 "min_latency_us": 1697.9781818181818, 00:22:59.317 "max_latency_us": 7864.32 00:22:59.317 } 00:22:59.317 ], 00:22:59.317 "core_count": 1 00:22:59.317 } 00:22:59.576 05:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:59.576 05:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:59.576 05:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:59.576 | .driver_specific 00:22:59.576 | .nvme_error 00:22:59.576 | .status_code 00:22:59.576 | .command_transient_transport_error' 00:22:59.576 05:33:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80930 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80930 ']' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80930 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80930 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:59.835 killing process with pid 80930 00:22:59.835 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.835 00:22:59.835 Latency(us) 00:22:59.835 [2024-11-20T05:33:14.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.835 [2024-11-20T05:33:14.348Z] =================================================================================================================== 00:22:59.835 [2024-11-20T05:33:14.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80930' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80930 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80930 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80745 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80745 ']' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80745 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80745 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:59.835 killing process with pid 80745 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80745' 00:22:59.835 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80745 00:22:59.835 05:33:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80745 00:23:00.094 00:23:00.094 real 0m16.352s 00:23:00.094 user 0m32.295s 00:23:00.094 sys 0m4.310s 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:00.094 ************************************ 00:23:00.094 END TEST nvmf_digest_error 00:23:00.094 ************************************ 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.094 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.094 rmmod nvme_tcp 00:23:00.094 rmmod nvme_fabrics 00:23:00.094 rmmod nvme_keyring 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80745 ']' 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80745 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 80745 ']' 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 80745 00:23:00.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (80745) - No such process 00:23:00.353 Process with pid 80745 is not found 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 80745 is not found' 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
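The (( 375 > 0 )) check in the digest-error teardown above is the transient-error count obtained by feeding bdev_get_iostat through the jq filter shown in the trace; here it evaluated to 375. A standalone sketch of that extraction, reusing the RPC socket, bdev name, and filter exactly as they appear in the log:
# sketch: pull the transient transport error count for nvme0n1 over the bperf RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'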
00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.353 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:23:00.612 00:23:00.612 real 0m34.461s 00:23:00.612 user 1m6.136s 00:23:00.612 sys 0m9.053s 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:00.612 ************************************ 00:23:00.612 END TEST nvmf_digest 00:23:00.612 ************************************ 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:00.612 05:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.612 ************************************ 00:23:00.612 START TEST nvmf_host_multipath 00:23:00.612 ************************************ 00:23:00.613 05:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:00.613 * Looking for test storage... 
00:23:00.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.613 --rc genhtml_branch_coverage=1 00:23:00.613 --rc genhtml_function_coverage=1 00:23:00.613 --rc genhtml_legend=1 00:23:00.613 --rc geninfo_all_blocks=1 00:23:00.613 --rc geninfo_unexecuted_blocks=1 00:23:00.613 00:23:00.613 ' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.613 --rc genhtml_branch_coverage=1 00:23:00.613 --rc genhtml_function_coverage=1 00:23:00.613 --rc genhtml_legend=1 00:23:00.613 --rc geninfo_all_blocks=1 00:23:00.613 --rc geninfo_unexecuted_blocks=1 00:23:00.613 00:23:00.613 ' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.613 --rc genhtml_branch_coverage=1 00:23:00.613 --rc genhtml_function_coverage=1 00:23:00.613 --rc genhtml_legend=1 00:23:00.613 --rc geninfo_all_blocks=1 00:23:00.613 --rc geninfo_unexecuted_blocks=1 00:23:00.613 00:23:00.613 ' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.613 --rc genhtml_branch_coverage=1 00:23:00.613 --rc genhtml_function_coverage=1 00:23:00.613 --rc genhtml_legend=1 00:23:00.613 --rc geninfo_all_blocks=1 00:23:00.613 --rc geninfo_unexecuted_blocks=1 00:23:00.613 00:23:00.613 ' 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.613 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.873 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:00.874 Cannot find device "nvmf_init_br" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:00.874 Cannot find device "nvmf_init_br2" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:00.874 Cannot find device "nvmf_tgt_br" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.874 Cannot find device "nvmf_tgt_br2" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:00.874 Cannot find device "nvmf_init_br" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:00.874 Cannot find device "nvmf_init_br2" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:00.874 Cannot find device "nvmf_tgt_br" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:00.874 Cannot find device "nvmf_tgt_br2" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:00.874 Cannot find device "nvmf_br" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:00.874 Cannot find device "nvmf_init_if" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:00.874 Cannot find device "nvmf_init_if2" 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:23:00.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:00.874 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:01.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:01.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:23:01.134 00:23:01.134 --- 10.0.0.3 ping statistics --- 00:23:01.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.134 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:01.134 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:01.134 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:23:01.134 00:23:01.134 --- 10.0.0.4 ping statistics --- 00:23:01.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.134 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:01.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:01.134 00:23:01.134 --- 10.0.0.1 ping statistics --- 00:23:01.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.134 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:01.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:01.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:01.134 00:23:01.134 --- 10.0.0.2 ping statistics --- 00:23:01.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.134 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81248 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81248 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 81248 ']' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:01.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:01.134 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:01.134 [2024-11-20 05:33:15.576085] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:23:01.134 [2024-11-20 05:33:15.576179] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.392 [2024-11-20 05:33:15.725396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:01.392 [2024-11-20 05:33:15.762948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.392 [2024-11-20 05:33:15.763001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.393 [2024-11-20 05:33:15.763015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.393 [2024-11-20 05:33:15.763024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.393 [2024-11-20 05:33:15.763033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.393 [2024-11-20 05:33:15.763966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.393 [2024-11-20 05:33:15.763977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.393 [2024-11-20 05:33:15.796763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81248 00:23:01.393 05:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.652 [2024-11-20 05:33:16.119151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.652 05:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:01.910 Malloc0 00:23:01.910 05:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:02.476 05:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.734 05:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:02.993 [2024-11-20 05:33:17.246951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:02.993 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:03.252 [2024-11-20 05:33:17.507087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81292 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81292 /var/tmp/bdevperf.sock 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 81292 ']' 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:03.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:03.252 05:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:04.219 05:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.219 05:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:23:04.219 05:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:04.785 05:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:05.044 Nvme0n1 00:23:05.044 05:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:05.302 Nvme0n1 00:23:05.302 05:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:23:05.302 05:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.679 05:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:06.679 05:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:06.679 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:07.246 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:07.246 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81341 00:23:07.246 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:07.246 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.807 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:13.807 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:13.807 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:13.807 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.807 Attaching 4 probes... 00:23:13.807 @path[10.0.0.3, 4421]: 17168 00:23:13.807 @path[10.0.0.3, 4421]: 16516 00:23:13.807 @path[10.0.0.3, 4421]: 17564 00:23:13.807 @path[10.0.0.3, 4421]: 17472 00:23:13.808 @path[10.0.0.3, 4421]: 17352 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81341 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:13.808 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:13.808 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:14.066 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:14.066 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81455 00:23:14.066 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:14.066 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.627 Attaching 4 probes... 00:23:20.627 @path[10.0.0.3, 4420]: 16919 00:23:20.627 @path[10.0.0.3, 4420]: 17255 00:23:20.627 @path[10.0.0.3, 4420]: 15897 00:23:20.627 @path[10.0.0.3, 4420]: 17409 00:23:20.627 @path[10.0.0.3, 4420]: 17056 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81455 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:20.627 05:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:20.886 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:21.145 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:21.145 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81574 00:23:21.145 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:21.145 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:27.705 05:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:27.705 05:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.705 Attaching 4 probes... 00:23:27.705 @path[10.0.0.3, 4421]: 16354 00:23:27.705 @path[10.0.0.3, 4421]: 15640 00:23:27.705 @path[10.0.0.3, 4421]: 15761 00:23:27.705 @path[10.0.0.3, 4421]: 15379 00:23:27.705 @path[10.0.0.3, 4421]: 13477 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81574 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:27.705 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:27.963 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:28.531 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:28.531 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81692 00:23:28.531 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:28.531 05:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:35.091 05:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:35.091 05:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.091 Attaching 4 probes... 
00:23:35.091 00:23:35.091 00:23:35.091 00:23:35.091 00:23:35.091 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81692 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:35.091 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:35.657 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:35.657 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81809 00:23:35.657 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:35.657 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:42.215 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:42.215 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.215 Attaching 4 probes... 
00:23:42.215 @path[10.0.0.3, 4421]: 14493 00:23:42.215 @path[10.0.0.3, 4421]: 16044 00:23:42.215 @path[10.0.0.3, 4421]: 16444 00:23:42.215 @path[10.0.0.3, 4421]: 16748 00:23:42.215 @path[10.0.0.3, 4421]: 15616 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81809 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.215 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:42.473 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:43.849 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:43.849 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81934 00:23:43.849 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.849 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:50.410 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:50.410 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.410 Attaching 4 probes... 
00:23:50.410 @path[10.0.0.3, 4420]: 15390 00:23:50.410 @path[10.0.0.3, 4420]: 16234 00:23:50.410 @path[10.0.0.3, 4420]: 15340 00:23:50.410 @path[10.0.0.3, 4420]: 13829 00:23:50.410 @path[10.0.0.3, 4420]: 15047 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81934 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:50.410 [2024-11-20 05:34:04.830288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:50.410 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:50.975 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:57.534 05:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:57.534 05:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82109 00:23:57.534 05:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81248 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:57.534 05:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:02.860 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:02.861 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.456 Attaching 4 probes... 
00:24:03.456 @path[10.0.0.3, 4421]: 16419 00:24:03.456 @path[10.0.0.3, 4421]: 16520 00:24:03.456 @path[10.0.0.3, 4421]: 16744 00:24:03.456 @path[10.0.0.3, 4421]: 16673 00:24:03.456 @path[10.0.0.3, 4421]: 16921 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82109 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81292 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 81292 ']' 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 81292 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81292 00:24:03.456 killing process with pid 81292 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81292' 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 81292 00:24:03.456 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 81292 00:24:03.456 { 00:24:03.456 "results": [ 00:24:03.456 { 00:24:03.456 "job": "Nvme0n1", 00:24:03.456 "core_mask": "0x4", 00:24:03.456 "workload": "verify", 00:24:03.456 "status": "terminated", 00:24:03.456 "verify_range": { 00:24:03.456 "start": 0, 00:24:03.456 "length": 16384 00:24:03.456 }, 00:24:03.456 "queue_depth": 128, 00:24:03.456 "io_size": 4096, 00:24:03.456 "runtime": 57.797236, 00:24:03.456 "iops": 6860.414570689851, 00:24:03.456 "mibps": 26.79849441675723, 00:24:03.456 "io_failed": 0, 00:24:03.456 "io_timeout": 0, 00:24:03.456 "avg_latency_us": 18624.99353444562, 00:24:03.456 "min_latency_us": 644.189090909091, 00:24:03.456 "max_latency_us": 7046430.72 00:24:03.456 } 00:24:03.456 ], 00:24:03.456 "core_count": 1 00:24:03.456 } 00:24:03.457 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81292 00:24:03.457 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:03.457 [2024-11-20 05:33:17.606717] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 
24.03.0 initialization... 00:24:03.457 [2024-11-20 05:33:17.606872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81292 ] 00:24:03.457 [2024-11-20 05:33:17.763090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.457 [2024-11-20 05:33:17.801716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.457 [2024-11-20 05:33:17.834333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:03.457 Running I/O for 90 seconds... 00:24:03.457 7760.00 IOPS, 30.31 MiB/s [2024-11-20T05:34:17.970Z] 8325.50 IOPS, 32.52 MiB/s [2024-11-20T05:34:17.970Z] 8502.33 IOPS, 33.21 MiB/s [2024-11-20T05:34:17.970Z] 8450.75 IOPS, 33.01 MiB/s [2024-11-20T05:34:17.970Z] 8509.40 IOPS, 33.24 MiB/s [2024-11-20T05:34:17.970Z] 8553.83 IOPS, 33.41 MiB/s [2024-11-20T05:34:17.970Z] 8567.29 IOPS, 33.47 MiB/s [2024-11-20T05:34:17.970Z] 8603.38 IOPS, 33.61 MiB/s [2024-11-20T05:34:17.970Z] [2024-11-20 05:33:28.478513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.478897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.478952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.478974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.457 [2024-11-20 05:33:28.479551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:03.457 [2024-11-20 05:33:28.479976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.457 [2024-11-20 05:33:28.479991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:24:03.458 [2024-11-20 05:33:28.480486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.458 [2024-11-20 05:33:28.480840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.480960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.480984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:03.458 [2024-11-20 05:33:28.481470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.458 [2024-11-20 05:33:28.481486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:03.459 [2024-11-20 05:33:28.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.481981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.481997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.459 [2024-11-20 05:33:28.482741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:24:03.459 [2024-11-20 05:33:28.482800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.482977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.459 [2024-11-20 05:33:28.482993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:03.459 [2024-11-20 05:33:28.483015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.483031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.484531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.484808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.484983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:28.485022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:28.485315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:28.485330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:03.460 8622.67 IOPS, 33.68 MiB/s [2024-11-20T05:34:17.973Z] 8622.00 IOPS, 33.68 MiB/s [2024-11-20T05:34:17.973Z] 8612.73 IOPS, 33.64 MiB/s [2024-11-20T05:34:17.973Z] 8557.67 IOPS, 33.43 MiB/s [2024-11-20T05:34:17.973Z] 8568.92 IOPS, 33.47 MiB/s [2024-11-20T05:34:17.973Z] 8566.57 IOPS, 33.46 MiB/s [2024-11-20T05:34:17.973Z] 8548.00 IOPS, 33.39 MiB/s [2024-11-20T05:34:17.973Z] [2024-11-20 05:33:35.182922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.183351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.183629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.460 [2024-11-20 05:33:35.183655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.184096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.184126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.184156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.184173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.184195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.184211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:03.460 [2024-11-20 05:33:35.184233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.460 [2024-11-20 05:33:35.184248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.184271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.184286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.184308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.184323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.184347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.184362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.184384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.184399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.186811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.186857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 
05:33:35.186891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.186927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.186953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.186968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.186992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.187048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.187098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.187135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.187173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.187210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.187807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.461 [2024-11-20 05:33:35.187823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.189275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.189327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.189378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.189446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.189473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:03.461 [2024-11-20 05:33:35.189510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.461 [2024-11-20 05:33:35.189537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.189941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.189970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.190307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.190946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.190979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.191365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.462 [2024-11-20 05:33:35.191393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:03.462 [2024-11-20 05:33:35.193712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.462 [2024-11-20 05:33:35.193732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.193771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.193809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.193849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.193887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.193946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.193968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.193984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 
05:33:35.194260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.194548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.463 [2024-11-20 05:33:35.194563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120808 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.463 [2024-11-20 05:33:35.197859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:03.463 [2024-11-20 05:33:35.197881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.197897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.197937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.197955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.197977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.197993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:24:03.464 [2024-11-20 05:33:35.198053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.198460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.198962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.199207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.199222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 
05:33:35.201662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.201984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.464 [2024-11-20 05:33:35.201999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119984 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.464 [2024-11-20 05:33:35.202245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:03.464 [2024-11-20 05:33:35.202267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.202618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.202634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:24:03.465 [2024-11-20 05:33:35.204435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.465 [2024-11-20 05:33:35.204843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.204883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.204941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.204966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.204983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.205007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.205022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.205046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.205062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.205086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.465 [2024-11-20 05:33:35.205102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:03.465 [2024-11-20 05:33:35.205126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.205505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.205522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.206957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.466 [2024-11-20 05:33:35.206974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 
05:33:35.207293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:03.466 [2024-11-20 05:33:35.207467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.466 [2024-11-20 05:33:35.207484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.207510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.207526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.207552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.207567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.207593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.207609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.207635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.207652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.209997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.210958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:03.467 [2024-11-20 05:33:35.210974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.211018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.211062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.467 [2024-11-20 05:33:35.211105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:03.467 [2024-11-20 05:33:35.211440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.467 [2024-11-20 05:33:35.211464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:03.467 8104.75 IOPS, 31.66 MiB/s [2024-11-20T05:34:17.980Z] 7989.82 IOPS, 31.21 MiB/s [2024-11-20T05:34:17.980Z] 7991.17 IOPS, 31.22 MiB/s [2024-11-20T05:34:17.980Z] 7991.63 IOPS, 31.22 MiB/s [2024-11-20T05:34:17.980Z] 7986.85 IOPS, 31.20 MiB/s [2024-11-20T05:34:17.980Z] 7904.05 IOPS, 30.88 MiB/s [2024-11-20T05:34:17.980Z] 7911.68 IOPS, 30.91 MiB/s [2024-11-20T05:34:17.980Z] [2024-11-20 05:33:42.714697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.714806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.714930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.714971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 
05:33:42.715308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.715667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:0038 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.715961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.715977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-11-20 05:33:42.716045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:03.468 [2024-11-20 05:33:42.716594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.468 [2024-11-20 05:33:42.716609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.716648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.716687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.716933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 
[2024-11-20 05:33:42.716975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.716999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.717355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3720 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.717977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.469 [2024-11-20 05:33:42.717995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.718030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.718058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.718082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.718098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.718120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.718136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:03.469 [2024-11-20 05:33:42.718158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-11-20 05:33:42.718173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:24:03.470 [2024-11-20 05:33:42.718577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.718630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.718986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.470 [2024-11-20 05:33:42.719010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-11-20 05:33:42.719521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:03.470 [2024-11-20 05:33:42.719543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:42.719558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.719587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:42.719604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.719627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:42.719642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:42.720568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 
[2024-11-20 05:33:42.720762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.720971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.720993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4016 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:42.721381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:42.721397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:03.471 7852.22 IOPS, 30.67 MiB/s [2024-11-20T05:34:17.984Z] 7525.04 IOPS, 29.39 MiB/s [2024-11-20T05:34:17.984Z] 7224.04 IOPS, 28.22 MiB/s [2024-11-20T05:34:17.984Z] 6946.19 IOPS, 27.13 MiB/s [2024-11-20T05:34:17.984Z] 6688.93 IOPS, 26.13 MiB/s [2024-11-20T05:34:17.984Z] 6450.04 IOPS, 25.20 MiB/s [2024-11-20T05:34:17.984Z] 6227.62 IOPS, 24.33 MiB/s [2024-11-20T05:34:17.984Z] 6076.00 IOPS, 23.73 MiB/s [2024-11-20T05:34:17.984Z] 6115.65 IOPS, 23.89 MiB/s [2024-11-20T05:34:17.984Z] 6175.69 IOPS, 24.12 MiB/s [2024-11-20T05:34:17.984Z] 6231.79 IOPS, 24.34 MiB/s [2024-11-20T05:34:17.984Z] 6292.68 IOPS, 24.58 MiB/s [2024-11-20T05:34:17.984Z] 6342.26 IOPS, 24.77 MiB/s [2024-11-20T05:34:17.984Z] 6390.08 IOPS, 24.96 MiB/s [2024-11-20T05:34:17.984Z] [2024-11-20 05:33:56.914584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.914674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.914764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.914804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.914842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.914925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.914969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.914991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.471 [2024-11-20 05:33:56.915214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.915250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.915287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:03.471 [2024-11-20 05:33:56.915309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.915324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:03.471 [2024-11-20 05:33:56.915346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.471 [2024-11-20 05:33:56.915362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.915872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.915920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.915954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.915970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.915984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.916015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.916057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916078] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.916093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.916122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.472 [2024-11-20 05:33:56.916152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.472 [2024-11-20 05:33:56.916671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.472 [2024-11-20 05:33:56.916687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 
[2024-11-20 05:33:56.916702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.916731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.916975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.916989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.473 [2024-11-20 05:33:56.917554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.473 [2024-11-20 05:33:56.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.473 [2024-11-20 05:33:56.917946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.917960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 
05:33:56.917976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.917989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.474 [2024-11-20 05:33:56.918520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:79 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.918984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.474 [2024-11-20 05:33:56.918998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.919013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151290 is same with the state(6) to be set 00:24:03.474 [2024-11-20 05:33:56.919031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.474 [2024-11-20 05:33:56.919050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.474 [2024-11-20 05:33:56.919068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62928 len:8 PRP1 0x0 PRP2 0x0 00:24:03.474 [2024-11-20 05:33:56.919084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.474 [2024-11-20 05:33:56.920404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:03.474 [2024-11-20 05:33:56.920502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.475 [2024-11-20 05:33:56.920543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.475 [2024-11-20 05:33:56.920581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c21d0 (9): Bad file descriptor 00:24:03.475 [2024-11-20 05:33:56.921040] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.475 [2024-11-20 05:33:56.921084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c21d0 with addr=10.0.0.3, port=4421 00:24:03.475 [2024-11-20 05:33:56.921103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c21d0 is same with the state(6) to be set 00:24:03.475 [2024-11-20 05:33:56.921170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c21d0 (9): Bad file descriptor 00:24:03.475 [2024-11-20 05:33:56.921209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:03.475 [2024-11-20 05:33:56.921227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:03.475 [2024-11-20 05:33:56.921244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:03.475 [2024-11-20 05:33:56.921259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
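The entries above capture one failed failover attempt in full: the initiator disconnects from the broken path, every queued command is completed manually and aborted with "SQ DELETION", the reconnect to 10.0.0.3 port 4421 is refused (errno 111, i.e. ECONNREFUSED), and bdev_nvme reports the controller reset as failed before scheduling another attempt; the "Resetting controller successful" entry further down shows a later retry landing. When reading a saved copy of this console output offline, a few greps are enough to tally those events. The sketch below is not part of the test scripts, and the file name console.log is an assumption.

    # Hypothetical triage helper for a saved copy of this console output.
    # Assumes the log was written to console.log; the patterns are the
    # literal SPDK messages that appear above.
    log=console.log

    # Queued I/Os aborted while the submission queue was being deleted.
    grep -c 'ABORTED - SQ DELETION' "$log"

    # Completions returned while the ANA state was inaccessible.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"

    # Timeline of reset attempts and their outcomes.
    grep -n -e 'resetting controller' \
            -e 'Resetting controller failed' \
            -e 'Resetting controller successful' \
            -e 'connect() failed, errno' "$log"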
00:24:03.475 [2024-11-20 05:33:56.921274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:03.475 6412.08 IOPS, 25.05 MiB/s [2024-11-20T05:34:17.988Z] 6458.00 IOPS, 25.23 MiB/s [2024-11-20T05:34:17.988Z] 6482.56 IOPS, 25.32 MiB/s [2024-11-20T05:34:17.988Z] 6522.10 IOPS, 25.48 MiB/s [2024-11-20T05:34:17.988Z] 6562.44 IOPS, 25.63 MiB/s [2024-11-20T05:34:17.988Z] 6565.24 IOPS, 25.65 MiB/s [2024-11-20T05:34:17.988Z] 6590.79 IOPS, 25.75 MiB/s [2024-11-20T05:34:17.988Z] 6618.64 IOPS, 25.85 MiB/s [2024-11-20T05:34:17.988Z] 6654.58 IOPS, 25.99 MiB/s [2024-11-20T05:34:17.988Z] 6660.57 IOPS, 26.02 MiB/s [2024-11-20T05:34:17.988Z] 6610.64 IOPS, 25.82 MiB/s [2024-11-20T05:34:17.988Z] [2024-11-20 05:34:07.034666] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:03.475 6603.62 IOPS, 25.80 MiB/s [2024-11-20T05:34:17.988Z] 6622.57 IOPS, 25.87 MiB/s [2024-11-20T05:34:17.988Z] 6641.16 IOPS, 25.94 MiB/s [2024-11-20T05:34:17.988Z] 6661.53 IOPS, 26.02 MiB/s [2024-11-20T05:34:17.988Z] 6693.42 IOPS, 26.15 MiB/s [2024-11-20T05:34:17.988Z] 6724.55 IOPS, 26.27 MiB/s [2024-11-20T05:34:17.988Z] 6753.06 IOPS, 26.38 MiB/s [2024-11-20T05:34:17.988Z] 6782.56 IOPS, 26.49 MiB/s [2024-11-20T05:34:17.988Z] 6810.30 IOPS, 26.60 MiB/s [2024-11-20T05:34:17.988Z] 6839.32 IOPS, 26.72 MiB/s [2024-11-20T05:34:17.988Z] Received shutdown signal, test time was about 57.798099 seconds 00:24:03.475 00:24:03.475 Latency(us) 00:24:03.475 [2024-11-20T05:34:17.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.475 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.475 Verification LBA range: start 0x0 length 0x4000 00:24:03.475 Nvme0n1 : 57.80 6860.41 26.80 0.00 0.00 18624.99 644.19 7046430.72 00:24:03.475 [2024-11-20T05:34:17.988Z] =================================================================================================================== 00:24:03.475 [2024-11-20T05:34:17.988Z] Total : 6860.41 26.80 0.00 0.00 18624.99 644.19 7046430.72 00:24:03.475 05:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.734 rmmod nvme_tcp 00:24:03.734 rmmod nvme_fabrics 00:24:03.734 rmmod nvme_keyring 00:24:03.734 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.993 05:34:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81248 ']' 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 81248 ']' 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81248' 00:24:03.993 killing process with pid 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 81248 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:03.993 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
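The trace above is the multipath test tearing its harness down: host/multipath.sh deletes the target subsystem over RPC, nvmftestfini unloads the nvme-tcp and nvme-fabrics modules, killprocess confirms that pid 81248 still belongs to the SPDK reactor before killing it, and nvmf_veth_fini begins dismantling the virtual network (the links are taken down here and deleted in the entries that follow). A condensed, illustrative version of that liveness-check-then-kill pattern is sketched below; the function name kill_spdk_app is made up for the example, and this is not the repository's killprocess helper.

    # Illustrative only -- not the killprocess helper from autotest_common.sh.
    kill_spdk_app() {
        local pid=$1
        # kill -0 merely probes whether the pid exists and can be signalled.
        kill -0 "$pid" 2>/dev/null || return 0
        # Log the command name first, as the trace above does (reactor_0 here).
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        # wait only reaps the process if it was started by this shell.
        wait "$pid" 2>/dev/null || true
    }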
00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.251 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:24:04.252 00:24:04.252 real 1m3.752s 00:24:04.252 user 2m58.890s 00:24:04.252 sys 0m19.221s 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:04.252 ************************************ 00:24:04.252 END TEST nvmf_host_multipath 00:24:04.252 ************************************ 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.252 ************************************ 00:24:04.252 START TEST nvmf_timeout 00:24:04.252 ************************************ 00:24:04.252 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:04.511 * Looking for test storage... 
00:24:04.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:04.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.511 --rc genhtml_branch_coverage=1 00:24:04.511 --rc genhtml_function_coverage=1 00:24:04.511 --rc genhtml_legend=1 00:24:04.511 --rc geninfo_all_blocks=1 00:24:04.511 --rc geninfo_unexecuted_blocks=1 00:24:04.511 00:24:04.511 ' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:04.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.511 --rc genhtml_branch_coverage=1 00:24:04.511 --rc genhtml_function_coverage=1 00:24:04.511 --rc genhtml_legend=1 00:24:04.511 --rc geninfo_all_blocks=1 00:24:04.511 --rc geninfo_unexecuted_blocks=1 00:24:04.511 00:24:04.511 ' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:04.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.511 --rc genhtml_branch_coverage=1 00:24:04.511 --rc genhtml_function_coverage=1 00:24:04.511 --rc genhtml_legend=1 00:24:04.511 --rc geninfo_all_blocks=1 00:24:04.511 --rc geninfo_unexecuted_blocks=1 00:24:04.511 00:24:04.511 ' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:04.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.511 --rc genhtml_branch_coverage=1 00:24:04.511 --rc genhtml_function_coverage=1 00:24:04.511 --rc genhtml_legend=1 00:24:04.511 --rc geninfo_all_blocks=1 00:24:04.511 --rc geninfo_unexecuted_blocks=1 00:24:04.511 00:24:04.511 ' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.511 
05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.511 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.512 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.512 05:34:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:04.512 Cannot find device "nvmf_init_br" 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:04.512 Cannot find device "nvmf_init_br2" 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:24:04.512 Cannot find device "nvmf_tgt_br" 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:24:04.512 05:34:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.512 Cannot find device "nvmf_tgt_br2" 00:24:04.512 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:24:04.512 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:04.512 Cannot find device "nvmf_init_br" 00:24:04.512 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:24:04.512 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:04.771 Cannot find device "nvmf_init_br2" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:04.771 Cannot find device "nvmf_tgt_br" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:04.771 Cannot find device "nvmf_tgt_br2" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:04.771 Cannot find device "nvmf_br" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:04.771 Cannot find device "nvmf_init_if" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:04.771 Cannot find device "nvmf_init_if2" 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:04.771 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
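[annotation] The nvmf_veth_init trace above boils down to a small fixed topology: two initiator veth pairs left in the root namespace, two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers enslaved to nvmf_br, and iptables opened for NVMe/TCP port 4420. The following is a condensed, hand-written sketch of that same setup, not the SPDK helper itself; device names and addresses are copied verbatim from the trace.
# Sketch of the topology configured by nvmf_veth_init (names/addresses from the trace above)
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get attached to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiators 10.0.0.1/.2 in the root namespace, targets 10.0.0.3/.4 in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and put the bridge-side peers on one bridge
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
[annotation] The four pings that follow in the trace simply verify this wiring: the root namespace can reach 10.0.0.3/.4 and the target namespace can reach 10.0.0.1/.2.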
00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:24:05.030 00:24:05.030 --- 10.0.0.3 ping statistics --- 00:24:05.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.030 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:05.030 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.031 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.031 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:24:05.031 00:24:05.031 --- 10.0.0.4 ping statistics --- 00:24:05.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.031 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:24:05.031 00:24:05.031 --- 10.0.0.1 ping statistics --- 00:24:05.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.031 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:05.031 00:24:05.031 --- 10.0.0.2 ping statistics --- 00:24:05.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.031 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82471 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82471 00:24:05.031 05:34:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82471 ']' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:05.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:05.031 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.031 [2024-11-20 05:34:19.480927] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:24:05.031 [2024-11-20 05:34:19.481034] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.289 [2024-11-20 05:34:19.628706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:05.289 [2024-11-20 05:34:19.676598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.289 [2024-11-20 05:34:19.676675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.289 [2024-11-20 05:34:19.676697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.289 [2024-11-20 05:34:19.676712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.289 [2024-11-20 05:34:19.676724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:05.289 [2024-11-20 05:34:19.677701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.289 [2024-11-20 05:34:19.677722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.289 [2024-11-20 05:34:19.713838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:05.289 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:05.289 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:05.289 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.289 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.289 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:05.548 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.548 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.548 05:34:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:05.807 [2024-11-20 05:34:20.092638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.807 05:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:06.066 Malloc0 00:24:06.066 05:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.325 05:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.584 05:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:06.842 [2024-11-20 05:34:21.250821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.842 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82513 00:24:06.842 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:06.842 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82513 /var/tmp/bdevperf.sock 00:24:06.842 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82513 ']' 00:24:06.842 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.843 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.843 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
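[annotation] Stripped of the xtrace noise, the target-side bring-up above is a short RPC sequence: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from timeout.sh), wrap it in subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.3:4420. A minimal sketch follows, with flags copied from the trace rather than interpreted.
# Target-side RPCs issued by host/timeout.sh against the nvmf_tgt started inside nvmf_tgt_ns_spdk
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192              # transport options exactly as recorded above
$rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[annotation] After the listener is up, the script launches bdevperf as a separate host process with its own RPC socket (/var/tmp/bdevperf.sock), which is what the waitforlisten message above is waiting for.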
00:24:06.843 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.843 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:06.843 [2024-11-20 05:34:21.331340] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:24:06.843 [2024-11-20 05:34:21.331449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82513 ] 00:24:07.101 [2024-11-20 05:34:21.482652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.102 [2024-11-20 05:34:21.523192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.102 [2024-11-20 05:34:21.556161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:07.102 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.102 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:07.102 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:07.670 05:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:07.927 NVMe0n1 00:24:07.927 05:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82529 00:24:07.927 05:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:24:07.927 05:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:08.202 Running I/O for 10 seconds... 
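[annotation] On the host side the behaviour under test is configured entirely on the attach call: --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 ask the nvme bdev module to retry the connection every 2 seconds and to declare the controller lost after 5 seconds. A condensed sketch of the host-side RPCs, with flags copied from the trace (the meaning of -r -1 is not interpreted here):
# Host-side RPCs issued against bdevperf's dedicated RPC socket
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s $sock bdev_nvme_set_options -r -1                  # retry option exactly as recorded above
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2     # reconnect every 2 s, give up after 5 s
# run the workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
[annotation] Immediately after the verify workload starts, the trace below removes the 10.0.0.3:4420 listener (host/timeout.sh@55), so the long run of "ABORTED - SQ DELETION" completions that follows is consistent with the target tearing down the queue pair while up to 128 commands are in flight.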
00:24:09.178 05:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:09.439 6452.00 IOPS, 25.20 MiB/s [2024-11-20T05:34:23.952Z] [2024-11-20 05:34:23.717312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66616 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.439 [2024-11-20 05:34:23.717671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.439 [2024-11-20 05:34:23.717680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:09.440 [2024-11-20 05:34:23.717783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.717982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.717992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.440 [2024-11-20 05:34:23.718378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.440 [2024-11-20 05:34:23.718398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.440 [2024-11-20 05:34:23.718419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.440 [2024-11-20 05:34:23.718440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.440 [2024-11-20 05:34:23.718460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.440 [2024-11-20 05:34:23.718473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:09.441 [2024-11-20 05:34:23.718655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.718984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.718995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.719004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.719031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.719069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.441 [2024-11-20 05:34:23.719089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.719110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.719131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.719153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.719174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.441 [2024-11-20 05:34:23.719185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.441 [2024-11-20 05:34:23.719194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.442 [2024-11-20 05:34:23.719254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:09.442 [2024-11-20 05:34:23.719518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 
05:34:23.719725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.442 [2024-11-20 05:34:23.719966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.442 [2024-11-20 05:34:23.719977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.719987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.719998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.720007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.720036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.720060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-11-20 05:34:23.720101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c010 is same with the state(6) to be set 00:24:09.443 [2024-11-20 05:34:23.720124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:09.443 [2024-11-20 05:34:23.720132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:09.443 [2024-11-20 05:34:23.720140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66536 len:8 PRP1 0x0 PRP2 0x0 00:24:09.443 [2024-11-20 05:34:23.720149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.443 [2024-11-20 05:34:23.720312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.443 [2024-11-20 05:34:23.720337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.443 [2024-11-20 05:34:23.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.443 [2024-11-20 05:34:23.720375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.443 [2024-11-20 05:34:23.720385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfee50 is same with the state(6) to be set 00:24:09.443 [2024-11-20 05:34:23.720611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:09.443 [2024-11-20 05:34:23.720642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfee50 (9): Bad file descriptor 00:24:09.443 [2024-11-20 05:34:23.720746] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.443 [2024-11-20 05:34:23.720773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfee50 with addr=10.0.0.3, port=4420 00:24:09.443 [2024-11-20 05:34:23.720785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfee50 is same with the state(6) to be set 00:24:09.443 [2024-11-20 05:34:23.720803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfee50 (9): Bad file descriptor 00:24:09.443 [2024-11-20 05:34:23.720819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:09.443 [2024-11-20 05:34:23.720829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:09.443 [2024-11-20 05:34:23.720839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:09.443 [2024-11-20 05:34:23.720849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
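(The "connect() failed, errno = 111" on uring_sock_create is ECONNREFUSED: the target is no longer accepting connections on 10.0.0.3:4420 for this test, so every reconnect attempt bdev_nvme makes for nqn.2016-06.io.spdk:cnode1 fails and the controller stays in the failed state. A minimal sketch of the liveness check host/timeout.sh runs over the same bdevperf RPC socket while this is going on, using the exact commands that appear in the trace that follows:)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # prints "NVMe0" / "NVMe0n1" while the controller is still registered,
    # and nothing once the controller has been given up on (the '' == '' checks below)
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'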
00:24:09.443 [2024-11-20 05:34:23.720860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:09.443 05:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:24:10.946 4122.00 IOPS, 16.10 MiB/s [2024-11-20T05:34:26.026Z] 2748.00 IOPS, 10.73 MiB/s [2024-11-20T05:34:26.026Z] [2024-11-20 05:34:25.721171] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.513 [2024-11-20 05:34:25.721252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfee50 with addr=10.0.0.3, port=4420 00:24:11.513 [2024-11-20 05:34:25.721269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfee50 is same with the state(6) to be set 00:24:11.513 [2024-11-20 05:34:25.721297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfee50 (9): Bad file descriptor 00:24:11.513 [2024-11-20 05:34:25.721317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:11.513 [2024-11-20 05:34:25.721328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:11.513 [2024-11-20 05:34:25.721339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:11.513 [2024-11-20 05:34:25.721350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:11.514 [2024-11-20 05:34:25.721363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:11.514 05:34:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:24:11.514 05:34:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.514 05:34:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:11.772 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:11.772 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:24:11.772 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:11.772 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:12.289 2061.00 IOPS, 8.05 MiB/s [2024-11-20T05:34:26.802Z] 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:12.289 05:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:24:13.225 1648.80 IOPS, 6.44 MiB/s [2024-11-20T05:34:27.738Z] [2024-11-20 05:34:27.721646] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.225 [2024-11-20 05:34:27.721730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfee50 with addr=10.0.0.3, port=4420 00:24:13.225 [2024-11-20 05:34:27.721747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfee50 is same with the state(6) to be set 00:24:13.225 [2024-11-20 05:34:27.721778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfee50 (9): Bad file descriptor 00:24:13.225 [2024-11-20 05:34:27.721799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:13.225 [2024-11-20 05:34:27.721809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:13.225 [2024-11-20 05:34:27.721821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:13.225 [2024-11-20 05:34:27.721833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:13.225 [2024-11-20 05:34:27.721844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:15.097 1374.00 IOPS, 5.37 MiB/s [2024-11-20T05:34:29.868Z] 1177.71 IOPS, 4.60 MiB/s [2024-11-20T05:34:29.868Z] [2024-11-20 05:34:29.721955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:15.355 [2024-11-20 05:34:29.722028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:15.355 [2024-11-20 05:34:29.722041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:15.355 [2024-11-20 05:34:29.722051] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:24:15.355 [2024-11-20 05:34:29.722063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:16.290 1030.50 IOPS, 4.03 MiB/s 00:24:16.290 Latency(us) 00:24:16.290 [2024-11-20T05:34:30.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.290 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.290 Verification LBA range: start 0x0 length 0x4000 00:24:16.290 NVMe0n1 : 8.27 997.03 3.89 15.48 0.00 126222.36 3798.11 7015926.69 00:24:16.290 [2024-11-20T05:34:30.803Z] =================================================================================================================== 00:24:16.290 [2024-11-20T05:34:30.803Z] Total : 997.03 3.89 15.48 0.00 126222.36 3798.11 7015926.69 00:24:16.290 { 00:24:16.290 "results": [ 00:24:16.290 { 00:24:16.290 "job": "NVMe0n1", 00:24:16.290 "core_mask": "0x4", 00:24:16.290 "workload": "verify", 00:24:16.290 "status": "finished", 00:24:16.290 "verify_range": { 00:24:16.290 "start": 0, 00:24:16.290 "length": 16384 00:24:16.290 }, 00:24:16.290 "queue_depth": 128, 00:24:16.290 "io_size": 4096, 00:24:16.290 "runtime": 8.26858, 00:24:16.290 "iops": 997.0273009392181, 00:24:16.290 "mibps": 3.8946378942938207, 00:24:16.290 "io_failed": 128, 00:24:16.290 "io_timeout": 0, 00:24:16.290 "avg_latency_us": 126222.36037353949, 00:24:16.290 "min_latency_us": 3798.109090909091, 00:24:16.290 "max_latency_us": 7015926.69090909 00:24:16.290 } 00:24:16.290 ], 00:24:16.290 "core_count": 1 00:24:16.290 } 00:24:17.226 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:24:17.226 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:17.226 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:17.483 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:17.483 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:24:17.483 05:34:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:17.483 05:34:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82529 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82513 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82513 ']' 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82513 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82513 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:18.049 killing process with pid 82513 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82513' 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82513 00:24:18.049 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.049 00:24:18.049 Latency(us) 00:24:18.049 [2024-11-20T05:34:32.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.049 [2024-11-20T05:34:32.562Z] =================================================================================================================== 00:24:18.049 [2024-11-20T05:34:32.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.049 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82513 00:24:18.307 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:18.566 [2024-11-20 05:34:32.895799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82662 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82662 /var/tmp/bdevperf.sock 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82662 ']' 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
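(Sanity check on the summary above: 997.03 IOPS x 4096 B per I/O is about 3.89 MiB/s, matching the reported throughput. With the first bdevperf instance killed, the script re-adds the TCP listener and launches a fresh bdevperf for the next case; condensed from the trace above, with -z leaving bdevperf idle until the perform_tests RPC that bdevperf.py issues later in the trace:)
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!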
00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.566 05:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:18.566 [2024-11-20 05:34:32.982566] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:24:18.566 [2024-11-20 05:34:32.982691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82662 ] 00:24:18.824 [2024-11-20 05:34:33.137474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.824 [2024-11-20 05:34:33.170600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.824 [2024-11-20 05:34:33.200487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.824 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.824 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:18.824 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:19.081 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:19.338 NVMe0n1 00:24:19.597 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82668 00:24:19.597 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.597 05:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:24:19.597 Running I/O for 10 seconds... 
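(The bdev_nvme_attach_controller call above is where the timeout behaviour under test is configured; the nvmf_subsystem_remove_listener that follows then pulls the listener out from under the attached controller. The per-flag comments in this sketch are the usual bdev_nvme semantics, not something the log itself states:)
    # reconnect-delay-sec 1:      wait ~1 s between reconnect attempts
    # fast-io-fail-timeout-sec 2: start failing queued I/O back ~2 s after the connection drops
    # ctrlr-loss-timeout-sec 5:   give up on (and delete) the controller after ~5 s without a reconnect
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &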
00:24:20.531 05:34:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:20.795 7393.00 IOPS, 28.88 MiB/s [2024-11-20T05:34:35.308Z] [2024-11-20 05:34:35.177515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f20a0 is same with the state(6) to be set 00:24:20.795 [2024-11-20 05:34:35.177855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f20a0 is same with the state(6) to be set 00:24:20.795 [2024-11-20 05:34:35.178135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f20a0 is same with the state(6) to be set 00:24:20.795 [2024-11-20 05:34:35.178356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.795 [2024-11-20 05:34:35.178584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.795 [2024-11-20 05:34:35.178864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.795 [2024-11-20 05:34:35.178896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.795 [2024-11-20 05:34:35.178938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.795 [2024-11-20 05:34:35.178961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.795 [2024-11-20 05:34:35.178984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.178996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.795 [2024-11-20 05:34:35.179007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.795 [2024-11-20 05:34:35.179019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.795 [2024-11-20 05:34:35.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:20.795 [2024-11-20 05:34:35.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.795 [2024-11-20 05:34:35.179052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.796 [2024-11-20 05:34:35.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.796 [2024-11-20 05:34:35.179959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.796 [2024-11-20 05:34:35.179972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.179982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.179994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 
[2024-11-20 05:34:35.180214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-11-20 05:34:35.180776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.797 [2024-11-20 05:34:35.180865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.797 [2024-11-20 05:34:35.180877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.180887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.180899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.181438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.181512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.181568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.181718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.181772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.181827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.181988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.182047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.182101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.182232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.182353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.182421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.182611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.182682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.182823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.182879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.182953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 
05:34:35.183224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-11-20 05:34:35.183691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.183981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.798 [2024-11-20 05:34:35.184148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.798 [2024-11-20 05:34:35.184160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf010 is same with the state(6) to be set 00:24:20.798 [2024-11-20 05:34:35.184178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.798 [2024-11-20 05:34:35.184187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.799 [2024-11-20 05:34:35.184195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72992 len:8 PRP1 0x0 PRP2 0x0 00:24:20.799 [2024-11-20 05:34:35.184205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.799 [2024-11-20 05:34:35.184423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.799 [2024-11-20 05:34:35.184442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.799 [2024-11-20 05:34:35.184461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.799 [2024-11-20 05:34:35.184472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.799 [2024-11-20 05:34:35.184482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.799 [2024-11-20 05:34:35.184492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.799 [2024-11-20 05:34:35.184503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.799 [2024-11-20 05:34:35.184513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.799 [2024-11-20 05:34:35.184523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set 00:24:20.799 [2024-11-20 05:34:35.184764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:20.799 [2024-11-20 05:34:35.184789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor 00:24:20.799 [2024-11-20 05:34:35.184893] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.799 [2024-11-20 05:34:35.184935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420 00:24:20.799 [2024-11-20 05:34:35.184948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set 00:24:20.799 [2024-11-20 05:34:35.184968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor 00:24:20.799 [2024-11-20 05:34:35.184985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:20.799 [2024-11-20 05:34:35.184996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:20.799 [2024-11-20 05:34:35.185008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:20.799 [2024-11-20 05:34:35.185019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:20.799 [2024-11-20 05:34:35.185030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:20.799 05:34:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:24:21.735 4498.50 IOPS, 17.57 MiB/s [2024-11-20T05:34:36.248Z] [2024-11-20 05:34:36.185184] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.735 [2024-11-20 05:34:36.185478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420 00:24:21.735 [2024-11-20 05:34:36.185631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set 00:24:21.735 [2024-11-20 05:34:36.185937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor 00:24:21.735 [2024-11-20 05:34:36.186190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:21.735 [2024-11-20 05:34:36.186322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:21.735 [2024-11-20 05:34:36.186389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:21.735 [2024-11-20 05:34:36.186531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:21.735 [2024-11-20 05:34:36.186674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:21.735 05:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:21.994 [2024-11-20 05:34:36.450101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:21.994 05:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82668 00:24:22.818 2999.00 IOPS, 11.71 MiB/s [2024-11-20T05:34:37.331Z] [2024-11-20 05:34:37.199996] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
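Editor's note: the listener toggling that drives this timeout scenario is done with the rpc.py calls visible in the log (nvmf_subsystem_add_listener above, nvmf_subsystem_remove_listener in the next test step). A minimal sketch of issuing the same two calls from Python, assuming only the script path, NQN, address, and port recorded in this log:

```python
import subprocess

# Illustrative only: path, NQN, address and port are the ones recorded in this log.
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def set_listener(present: bool) -> None:
    # Add or remove the TCP listener the host keeps reconnecting to;
    # arguments mirror the rpc.py invocations captured above.
    verb = "nvmf_subsystem_add_listener" if present else "nvmf_subsystem_remove_listener"
    subprocess.run(
        [RPC, verb, NQN, "-t", "tcp", "-a", "10.0.0.3", "-s", "4420"],
        check=True,
    )

# Dropping the listener produces the connect() errno 111 failures seen above;
# re-adding it lets the pending reset complete ("Resetting controller successful").
set_listener(False)
set_listener(True)
```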
00:24:24.767 2249.25 IOPS, 8.79 MiB/s [2024-11-20T05:34:40.212Z] 3246.80 IOPS, 12.68 MiB/s [2024-11-20T05:34:41.148Z] 3945.83 IOPS, 15.41 MiB/s [2024-11-20T05:34:42.080Z] 4456.43 IOPS, 17.41 MiB/s [2024-11-20T05:34:43.016Z] 4935.62 IOPS, 19.28 MiB/s [2024-11-20T05:34:44.391Z] 5391.22 IOPS, 21.06 MiB/s [2024-11-20T05:34:44.391Z] 5761.60 IOPS, 22.51 MiB/s 00:24:29.878 Latency(us) 00:24:29.878 [2024-11-20T05:34:44.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.878 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:29.878 Verification LBA range: start 0x0 length 0x4000 00:24:29.878 NVMe0n1 : 10.01 5766.90 22.53 0.00 0.00 22142.00 2249.08 3035150.89 00:24:29.878 [2024-11-20T05:34:44.391Z] =================================================================================================================== 00:24:29.878 [2024-11-20T05:34:44.391Z] Total : 5766.90 22.53 0.00 0.00 22142.00 2249.08 3035150.89 00:24:29.878 { 00:24:29.878 "results": [ 00:24:29.878 { 00:24:29.878 "job": "NVMe0n1", 00:24:29.878 "core_mask": "0x4", 00:24:29.878 "workload": "verify", 00:24:29.878 "status": "finished", 00:24:29.878 "verify_range": { 00:24:29.878 "start": 0, 00:24:29.878 "length": 16384 00:24:29.878 }, 00:24:29.878 "queue_depth": 128, 00:24:29.878 "io_size": 4096, 00:24:29.878 "runtime": 10.010235, 00:24:29.878 "iops": 5766.897580326536, 00:24:29.878 "mibps": 22.52694367315053, 00:24:29.878 "io_failed": 0, 00:24:29.878 "io_timeout": 0, 00:24:29.878 "avg_latency_us": 22141.997565007052, 00:24:29.878 "min_latency_us": 2249.0763636363636, 00:24:29.878 "max_latency_us": 3035150.8945454545 00:24:29.878 } 00:24:29.878 ], 00:24:29.878 "core_count": 1 00:24:29.878 } 00:24:29.878 05:34:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82773 00:24:29.878 05:34:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:24:29.878 05:34:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.878 Running I/O for 10 seconds... 
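Editor's note: the bdevperf summary above is also emitted as a JSON blob. A small sketch of pulling the headline numbers back out of that blob, using only the keys and values shown in the log:

```python
import json

# The raw string below is copied from the perform_tests result printed above.
raw = """
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.010235,
      "iops": 5766.897580326536,
      "mibps": 22.52694367315053,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 22141.997565007052,
      "min_latency_us": 2249.0763636363636,
      "max_latency_us": 3035150.8945454545
    }
  ],
  "core_count": 1
}
"""

# Print one summary line per job: IOPS, throughput, failures, average latency.
for r in json.loads(raw)["results"]:
    print(f'{r["job"]}: {r["iops"]:.2f} IOPS, {r["mibps"]:.2f} MiB/s, '
          f'{r["io_failed"]} failed, avg latency {r["avg_latency_us"]:.1f} us')
```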
00:24:30.816 05:34:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:30.816 6691.00 IOPS, 26.14 MiB/s [2024-11-20T05:34:45.329Z] [2024-11-20 05:34:45.302705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.816 [2024-11-20 05:34:45.302763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.302989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63184 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.302999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:30.816 [2024-11-20 05:34:45.303214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.816 [2024-11-20 05:34:45.303413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.816 [2024-11-20 05:34:45.303424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303646] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.303982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.303991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 
[2024-11-20 05:34:45.304108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.817 [2024-11-20 05:34:45.304279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.817 [2024-11-20 05:34:45.304289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.304979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.304991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 
05:34:45.305000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.818 [2024-11-20 05:34:45.305137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.818 [2024-11-20 05:34:45.305147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.819 [2024-11-20 05:34:45.305168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.819 [2024-11-20 05:34:45.305188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.819 [2024-11-20 05:34:45.305209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.819 [2024-11-20 05:34:45.305532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.819 [2024-11-20 05:34:45.305553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec0190 is same with the state(6) to be set 00:24:30.819 [2024-11-20 05:34:45.305576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.819 [2024-11-20 05:34:45.305584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.819 [2024-11-20 05:34:45.305593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64008 len:8 PRP1 0x0 PRP2 0x0 00:24:30.819 [2024-11-20 05:34:45.305602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.819 [2024-11-20 05:34:45.305744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.819 [2024-11-20 05:34:45.305771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.819 [2024-11-20 05:34:45.305791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.819 [2024-11-20 05:34:45.305811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.819 [2024-11-20 05:34:45.305820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set 00:24:30.819 [2024-11-20 05:34:45.306057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:30.819 [2024-11-20 05:34:45.306084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor 00:24:30.819 [2024-11-20 05:34:45.306182] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.819 [2024-11-20 05:34:45.306204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420 00:24:30.819 [2024-11-20 05:34:45.306216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set 00:24:30.819 [2024-11-20 05:34:45.306235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor 00:24:30.819 [2024-11-20 05:34:45.306252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:30.819 [2024-11-20 05:34:45.306262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:30.819 [2024-11-20 05:34:45.306273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:30.819 [2024-11-20 05:34:45.306284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:30.819 [2024-11-20 05:34:45.306295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:30.819 05:34:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:24:32.013 3937.00 IOPS, 15.38 MiB/s [2024-11-20T05:34:46.526Z] [2024-11-20 05:34:46.306458] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.013 [2024-11-20 05:34:46.306546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420
00:24:32.013 [2024-11-20 05:34:46.306566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set
00:24:32.013 [2024-11-20 05:34:46.306594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor
00:24:32.013 [2024-11-20 05:34:46.306615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:24:32.013 [2024-11-20 05:34:46.306626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:24:32.013 [2024-11-20 05:34:46.306638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:32.013 [2024-11-20 05:34:46.306650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:24:32.013 [2024-11-20 05:34:46.306662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:32.949 2624.67 IOPS, 10.25 MiB/s [2024-11-20T05:34:47.462Z] [2024-11-20 05:34:47.306840] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.949 [2024-11-20 05:34:47.306933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420
00:24:32.949 [2024-11-20 05:34:47.306952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set
00:24:32.949 [2024-11-20 05:34:47.306981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor
00:24:32.949 [2024-11-20 05:34:47.307002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:24:32.949 [2024-11-20 05:34:47.307014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:24:32.949 [2024-11-20 05:34:47.307027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:32.949 [2024-11-20 05:34:47.307040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:24:32.949 [2024-11-20 05:34:47.307052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:33.897 1968.50 IOPS, 7.69 MiB/s [2024-11-20T05:34:48.409Z] [2024-11-20 05:34:48.310844] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.896 [2024-11-20 05:34:48.310933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e51e50 with addr=10.0.0.3, port=4420
00:24:33.896 [2024-11-20 05:34:48.310952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51e50 is same with the state(6) to be set
00:24:33.896 [2024-11-20 05:34:48.311215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51e50 (9): Bad file descriptor
00:24:33.896 [2024-11-20 05:34:48.311476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:24:33.897 [2024-11-20 05:34:48.311490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:24:33.897 [2024-11-20 05:34:48.311502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:33.897 [2024-11-20 05:34:48.311514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:24:33.897 [2024-11-20 05:34:48.311527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:33.897 05:34:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:24:34.155 [2024-11-20 05:34:48.584169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:24:34.155 05:34:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82773
00:24:34.980 1574.80 IOPS, 6.15 MiB/s [2024-11-20T05:34:49.493Z] [2024-11-20 05:34:49.343054] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
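The recovery above is the listener drop/re-add cycle that host/timeout.sh is exercising: while the target's TCP listener is gone, every bdev_nvme reconnect attempt fails with errno 111, and once nvmf_subsystem_add_listener brings the listener back the next reconnect succeeds and the pending reset completes. A minimal sketch of that cycle against an already-running target, assuming the target answers RPCs on the default socket (illustrative, not the literal test script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the listener; the initiator's periodic reconnects start failing (connect() errno = 111).
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  # Stay down for less time than the initiator's controller-loss timeout.
  sleep 3
  # Restore the listener; the next reconnect attempt succeeds and the reset completes.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420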
00:24:36.848 2471.50 IOPS, 9.65 MiB/s [2024-11-20T05:34:52.296Z] 3230.43 IOPS, 12.62 MiB/s [2024-11-20T05:34:53.230Z] 3622.62 IOPS, 14.15 MiB/s [2024-11-20T05:34:54.163Z] 4016.22 IOPS, 15.69 MiB/s [2024-11-20T05:34:54.163Z] 4493.10 IOPS, 17.55 MiB/s
00:24:39.650 Latency(us)
00:24:39.650 [2024-11-20T05:34:54.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.650 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:39.650 Verification LBA range: start 0x0 length 0x4000
00:24:39.650 NVMe0n1 : 10.01 4496.70 17.57 3470.16 0.00 16032.54 789.41 3019898.88
00:24:39.650 [2024-11-20T05:34:54.163Z] ===================================================================================================================
00:24:39.650 [2024-11-20T05:34:54.163Z] Total : 4496.70 17.57 3470.16 0.00 16032.54 0.00 3019898.88
00:24:39.650 {
00:24:39.650 "results": [
00:24:39.650 {
00:24:39.650 "job": "NVMe0n1",
00:24:39.650 "core_mask": "0x4",
00:24:39.650 "workload": "verify",
00:24:39.650 "status": "finished",
00:24:39.650 "verify_range": {
00:24:39.650 "start": 0,
00:24:39.650 "length": 16384
00:24:39.650 },
00:24:39.650 "queue_depth": 128,
00:24:39.650 "io_size": 4096,
00:24:39.650 "runtime": 10.009333,
00:24:39.650 "iops": 4496.703226878354,
00:24:39.650 "mibps": 17.56524697999357,
00:24:39.650 "io_failed": 34734,
00:24:39.650 "io_timeout": 0,
00:24:39.650 "avg_latency_us": 16032.538140412436,
00:24:39.650 "min_latency_us": 789.4109090909091,
00:24:39.650 "max_latency_us": 3019898.88
00:24:39.650 }
00:24:39.650 ],
00:24:39.650 "core_count": 1
00:24:39.650 }
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82662
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82662 ']'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82662
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82662
00:24:39.908 killing process with pid 82662
00:24:39.908 Received shutdown signal, test time was about 10.000000 seconds
00:24:39.908
00:24:39.908 Latency(us)
00:24:39.908 [2024-11-20T05:34:54.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.908 [2024-11-20T05:34:54.421Z] ===================================================================================================================
00:24:39.908 [2024-11-20T05:34:54.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82662'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82662
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82662
00:24:39.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
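The run that just finished was summarized twice above: as the human-readable latency table and as the JSON block returned by bdevperf.py perform_tests, where io_failed lines up with the I/Os aborted while the listener was down. A small sketch for pulling the headline numbers out of that JSON with jq, assuming it has been captured to a file named results.json (the filename is illustrative, not something the test writes):

  # Print job name, achieved IOPS, failed I/O count and average latency from the results array.
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json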
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82888
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82888 /var/tmp/bdevperf.sock
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82888 ']'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:39.908 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:40.166 [2024-11-20 05:34:54.390703] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization...
00:24:40.166 [2024-11-20 05:34:54.390799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82888 ]
00:24:40.166 [2024-11-20 05:34:54.535981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:40.166 [2024-11-20 05:34:54.571490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:40.166 [2024-11-20 05:34:54.603295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:40.425 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:40.425 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:24:40.425 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82896
00:24:40.425 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82888 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:24:40.425 05:34:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:24:40.684 05:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:40.942 NVMe0n1
00:24:40.942 05:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82938
00:24:40.942 05:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:40.942 05:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:24:41.200 Running I/O for 10 seconds...
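The trace above is the usual pattern for driving bdevperf as an RPC server: start it idle with -z, wait for its RPC socket, set the bdev_nvme options, attach the NVMe-oF/TCP controller with the reconnect knobs this test exercises, then trigger the run with bdevperf.py perform_tests. A condensed sketch of that flow, assuming the same repo layout as this job (in the real script, waitforlisten polls the socket instead of the sleep used here); as used here, --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 make bdev_nvme retry the connection roughly every 2 s and give up on the controller after about 5 s without one:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z) on core 2 (mask 0x4); it waits for RPC configuration on $sock.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  sleep 1   # stand-in for waitforlisten
  # Configure bdev_nvme and attach the controller with the timeout knobs under test.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the configured randread workload; the summary comes back as the JSON shown earlier.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests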
00:24:42.135 05:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:42.395 13843.00 IOPS, 54.07 MiB/s [2024-11-20T05:34:56.908Z] [2024-11-20 05:34:56.855728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.855980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.855990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.856001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.856011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.856022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84696 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.856031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.856042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.856051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.856062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.395 [2024-11-20 05:34:56.856072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.395 [2024-11-20 05:34:56.856083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:42.396 [2024-11-20 05:34:56.856249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856525] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.856983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.856994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.857004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.857020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.857031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.857042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.396 [2024-11-20 05:34:56.857051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.396 [2024-11-20 05:34:56.857065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 
[2024-11-20 05:34:56.857546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.857977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.857992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.858002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.858013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.858022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.858037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.858052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.397 [2024-11-20 05:34:56.858070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.397 [2024-11-20 05:34:56.858084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34688 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 
[2024-11-20 05:34:56.858895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.858987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.858996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.859008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.859017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.859029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.859044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.859060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.398 [2024-11-20 05:34:56.859071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.398 [2024-11-20 05:34:56.859086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141c150 is same with the state(6) to be set 00:24:42.398 [2024-11-20 05:34:56.859100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:42.398 [2024-11-20 05:34:56.859113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:42.398 [2024-11-20 05:34:56.859127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35864 len:8 PRP1 0x0 PRP2 0x0 00:24:42.398 [2024-11-20 05:34:56.859142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.399 [2024-11-20 05:34:56.859301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.399 [2024-11-20 05:34:56.859326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.399 [2024-11-20 05:34:56.859341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:42.399 [2024-11-20 05:34:56.859351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.399 [2024-11-20 05:34:56.859361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.399 [2024-11-20 05:34:56.859373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.399 [2024-11-20 05:34:56.859387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.399 [2024-11-20 05:34:56.859399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.399 [2024-11-20 05:34:56.859414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aee50 is same with the state(6) to be set 00:24:42.399 [2024-11-20 05:34:56.859706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:42.399 [2024-11-20 05:34:56.859736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13aee50 (9): Bad file descriptor 00:24:42.399 [2024-11-20 05:34:56.859856] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.399 [2024-11-20 05:34:56.859894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aee50 with addr=10.0.0.3, port=4420 00:24:42.399 [2024-11-20 05:34:56.859926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aee50 is same with the state(6) to be set 00:24:42.399 [2024-11-20 05:34:56.859949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13aee50 (9): Bad file descriptor 00:24:42.399 [2024-11-20 05:34:56.859966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:42.399 [2024-11-20 05:34:56.859975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:42.399 [2024-11-20 05:34:56.859985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:42.399 [2024-11-20 05:34:56.859995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
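The burst of ABORTED - SQ DELETION completions above is the reset path doing its job: every read still queued on I/O qpair 1 is completed manually with an abort status before the host begins its reconnect attempts. From here the initiator retries the TCP connection to 10.0.0.3:4420 roughly every two seconds and keeps failing with errno 111 (connection refused), and the script verifies this further down by counting the 'reconnect delay' entries bdevperf wrote to trace.txt (the grep at host/timeout.sh@132). A minimal bash sketch of that check, assuming the same trace file path used in this run:

  #!/usr/bin/env bash
  # Count the reconnect-delay events traced for NVMe0 and require more than
  # two of them, mirroring the check visible later in this log. The trace
  # file path below is taken from this run and is an assumption.
  trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  if (( delays <= 2 )); then
      echo "expected more than 2 reconnect delays, saw $delays" >&2
      exit 1
  fi
  echo "observed $delays reconnect delays"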
00:24:42.399 [2024-11-20 05:34:56.860007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:42.399 05:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82938 00:24:44.265 8890.00 IOPS, 34.73 MiB/s [2024-11-20T05:34:59.037Z] 5926.67 IOPS, 23.15 MiB/s [2024-11-20T05:34:59.037Z] [2024-11-20 05:34:58.860326] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-11-20 05:34:58.860641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aee50 with addr=10.0.0.3, port=4420 00:24:44.524 [2024-11-20 05:34:58.860948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aee50 is same with the state(6) to be set 00:24:44.524 [2024-11-20 05:34:58.861172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13aee50 (9): Bad file descriptor 00:24:44.524 [2024-11-20 05:34:58.861412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:44.524 [2024-11-20 05:34:58.861720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:44.524 [2024-11-20 05:34:58.862051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:44.524 [2024-11-20 05:34:58.862212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:44.524 [2024-11-20 05:34:58.862419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:46.478 4445.00 IOPS, 17.36 MiB/s [2024-11-20T05:35:00.991Z] 3556.00 IOPS, 13.89 MiB/s [2024-11-20T05:35:00.991Z] [2024-11-20 05:35:00.862804] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.478 [2024-11-20 05:35:00.862870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aee50 with addr=10.0.0.3, port=4420 00:24:46.478 [2024-11-20 05:35:00.862887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aee50 is same with the state(6) to be set 00:24:46.478 [2024-11-20 05:35:00.862930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13aee50 (9): Bad file descriptor 00:24:46.478 [2024-11-20 05:35:00.862953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:46.478 [2024-11-20 05:35:00.862964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:46.478 [2024-11-20 05:35:00.862975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:46.478 [2024-11-20 05:35:00.862986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:46.478 [2024-11-20 05:35:00.862997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:48.373 2963.33 IOPS, 11.58 MiB/s [2024-11-20T05:35:02.886Z] 2540.00 IOPS, 9.92 MiB/s [2024-11-20T05:35:02.886Z] [2024-11-20 05:35:02.863066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:24:48.373 [2024-11-20 05:35:02.863144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 
00:24:48.373 [2024-11-20 05:35:02.863158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 
00:24:48.373 [2024-11-20 05:35:02.863168] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 
00:24:48.373 [2024-11-20 05:35:02.863179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:24:49.566 2222.50 IOPS, 8.68 MiB/s 
00:24:49.566 
00:24:49.566 Latency(us) 
00:24:49.566 [2024-11-20T05:35:04.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:49.566 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 
00:24:49.566 NVMe0n1 : 8.28 2146.64 8.39 15.45 0.00 59146.16 8638.84 7015926.69 
00:24:49.566 [2024-11-20T05:35:04.079Z] =================================================================================================================== 
00:24:49.566 [2024-11-20T05:35:04.079Z] Total : 2146.64 8.39 15.45 0.00 59146.16 8638.84 7015926.69 
00:24:49.566 { 
00:24:49.566 "results": [ 
00:24:49.566 { 
00:24:49.566 "job": "NVMe0n1", 
00:24:49.566 "core_mask": "0x4", 
00:24:49.566 "workload": "randread", 
00:24:49.566 "status": "finished", 
00:24:49.566 "queue_depth": 128, 
00:24:49.566 "io_size": 4096, 
00:24:49.566 "runtime": 8.2827, 
00:24:49.566 "iops": 2146.6430028855325, 
00:24:49.566 "mibps": 8.385324230021611, 
00:24:49.566 "io_failed": 128, 
00:24:49.566 "io_timeout": 0, 
00:24:49.566 "avg_latency_us": 59146.158490872534, 
00:24:49.566 "min_latency_us": 8638.836363636363, 
00:24:49.566 "max_latency_us": 7015926.69090909 
00:24:49.566 } 
00:24:49.566 ], 
00:24:49.566 "core_count": 1 
00:24:49.566 } 
00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 
00:24:49.566 Attaching 5 probes... 
00:24:49.566 1669.909598: reset bdev controller NVMe0 00:24:49.566 1670.005383: reconnect bdev controller NVMe0 00:24:49.566 3670.393490: reconnect delay bdev controller NVMe0 00:24:49.566 3670.421192: reconnect bdev controller NVMe0 00:24:49.566 5672.896461: reconnect delay bdev controller NVMe0 00:24:49.566 5672.920096: reconnect bdev controller NVMe0 00:24:49.566 7673.248856: reconnect delay bdev controller NVMe0 00:24:49.566 7673.273597: reconnect bdev controller NVMe0 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82896 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82888 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82888 ']' 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82888 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82888 00:24:49.566 killing process with pid 82888 00:24:49.566 Received shutdown signal, test time was about 8.348801 seconds 00:24:49.566 00:24:49.566 Latency(us) 00:24:49.566 [2024-11-20T05:35:04.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.566 [2024-11-20T05:35:04.079Z] =================================================================================================================== 00:24:49.566 [2024-11-20T05:35:04.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82888' 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82888 00:24:49.566 05:35:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82888 00:24:49.566 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.132 05:35:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.132 rmmod nvme_tcp 00:24:50.132 rmmod nvme_fabrics 00:24:50.132 rmmod nvme_keyring 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82471 ']' 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82471 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82471 ']' 00:24:50.132 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82471 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82471 00:24:50.133 killing process with pid 82471 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82471' 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82471 00:24:50.133 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82471 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:50.391 05:35:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:50.391 00:24:50.391 real 0m46.158s 00:24:50.391 user 2m15.676s 00:24:50.391 sys 0m5.906s 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:50.391 05:35:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.391 ************************************ 00:24:50.391 END TEST nvmf_timeout 00:24:50.391 ************************************ 00:24:50.650 05:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:50.650 05:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:50.650 00:24:50.650 real 5m18.157s 00:24:50.650 user 14m8.443s 00:24:50.650 sys 1m11.888s 00:24:50.650 05:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:50.650 05:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.650 ************************************ 00:24:50.650 END TEST nvmf_host 00:24:50.650 ************************************ 00:24:50.650 05:35:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:50.650 05:35:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:50.650 ************************************ 00:24:50.650 END TEST nvmf_tcp 00:24:50.650 ************************************ 00:24:50.650 00:24:50.650 real 13m15.665s 00:24:50.650 user 32m29.814s 00:24:50.650 sys 3m11.901s 00:24:50.650 05:35:04 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:50.650 05:35:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.650 05:35:05 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:24:50.650 05:35:05 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:50.650 05:35:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:50.650 05:35:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:50.650 05:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.650 ************************************ 00:24:50.650 START TEST nvmf_dif 00:24:50.650 ************************************ 00:24:50.650 05:35:05 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:50.650 * Looking for test storage... 
00:24:50.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:50.650 05:35:05 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:50.650 05:35:05 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:24:50.650 05:35:05 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.909 --rc genhtml_branch_coverage=1 00:24:50.909 --rc genhtml_function_coverage=1 00:24:50.909 --rc genhtml_legend=1 00:24:50.909 --rc geninfo_all_blocks=1 00:24:50.909 --rc geninfo_unexecuted_blocks=1 00:24:50.909 00:24:50.909 ' 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.909 --rc genhtml_branch_coverage=1 00:24:50.909 --rc genhtml_function_coverage=1 00:24:50.909 --rc genhtml_legend=1 00:24:50.909 --rc geninfo_all_blocks=1 00:24:50.909 --rc geninfo_unexecuted_blocks=1 00:24:50.909 00:24:50.909 ' 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:24:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.909 --rc genhtml_branch_coverage=1 00:24:50.909 --rc genhtml_function_coverage=1 00:24:50.909 --rc genhtml_legend=1 00:24:50.909 --rc geninfo_all_blocks=1 00:24:50.909 --rc geninfo_unexecuted_blocks=1 00:24:50.909 00:24:50.909 ' 00:24:50.909 05:35:05 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.909 --rc genhtml_branch_coverage=1 00:24:50.909 --rc genhtml_function_coverage=1 00:24:50.909 --rc genhtml_legend=1 00:24:50.909 --rc geninfo_all_blocks=1 00:24:50.909 --rc geninfo_unexecuted_blocks=1 00:24:50.909 00:24:50.909 ' 00:24:50.909 05:35:05 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.909 05:35:05 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.909 05:35:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.909 05:35:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.909 05:35:05 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.910 05:35:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.910 05:35:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:50.910 05:35:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.910 05:35:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:50.910 05:35:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:50.910 05:35:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:50.910 05:35:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:50.910 05:35:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.910 05:35:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:50.910 05:35:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:50.910 05:35:05 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:50.910 Cannot find device "nvmf_init_br" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:50.910 Cannot find device "nvmf_init_br2" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:50.910 Cannot find device "nvmf_tgt_br" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.910 Cannot find device "nvmf_tgt_br2" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:50.910 Cannot find device "nvmf_init_br" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:50.910 Cannot find device "nvmf_init_br2" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:50.910 Cannot find device "nvmf_tgt_br" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:50.910 Cannot find device "nvmf_tgt_br2" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:50.910 Cannot find device "nvmf_br" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:50.910 Cannot find device "nvmf_init_if" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:50.910 Cannot find device "nvmf_init_if2" 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.910 05:35:05 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:51.168 05:35:05 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:51.168 05:35:05 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:51.169 05:35:05 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:51.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:51.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:51.169 00:24:51.169 --- 10.0.0.3 ping statistics --- 00:24:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.169 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:51.169 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:51.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:24:51.169 00:24:51.169 --- 10.0.0.4 ping statistics --- 00:24:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.169 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:51.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:24:51.169 00:24:51.169 --- 10.0.0.1 ping statistics --- 00:24:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.169 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:51.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:51.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:24:51.169 00:24:51.169 --- 10.0.0.2 ping statistics --- 00:24:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.169 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:51.169 05:35:05 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:51.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.427 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:51.427 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.427 05:35:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.687 05:35:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:51.687 05:35:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:51.687 05:35:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:51.687 05:35:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83434 00:24:51.687 05:35:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:51.687 05:35:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83434 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 83434 ']' 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:51.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:51.687 05:35:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:51.687 [2024-11-20 05:35:05.997325] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:24:51.687 [2024-11-20 05:35:05.997407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.687 [2024-11-20 05:35:06.145311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.687 [2024-11-20 05:35:06.179831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
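Note on the setup traced above: nvmf_veth_init builds the topology the rest of this run depends on, namely two initiator-side veth pairs left on the host, two target-side pairs moved into the nvmf_tgt_ns_spdk namespace, every host-side peer enslaved to a single bridge, iptables rules admitting NVMe/TCP on port 4420, and cross-namespace pings as a connectivity check, after which nvmf_tgt is launched inside the namespace. A condensed, hand-runnable sketch follows; it uses the same interface names, addresses, and binary path as this run, but omits the script's error-tolerant pre-cleanup and the ipts comment-tagging wrapper, so treat it as an illustration rather than the script itself.

    # Condensed from the nvmf_veth_init trace above; names and 10.0.0.0/24 addresses as in this run.
    ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side veth pairs (host)
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # one bridge ties all host-side peers together
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the initiator links
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let traffic hairpin across the bridge
    ping -c 1 10.0.0.3                                                   # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &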
00:24:51.687 [2024-11-20 05:35:06.179938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.687 [2024-11-20 05:35:06.179971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.687 [2024-11-20 05:35:06.179985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.687 [2024-11-20 05:35:06.179998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.687 [2024-11-20 05:35:06.180387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.946 [2024-11-20 05:35:06.215320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:24:52.880 05:35:07 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 05:35:07 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.880 05:35:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:52.880 05:35:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 [2024-11-20 05:35:07.085050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.880 05:35:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.880 05:35:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 ************************************ 00:24:52.880 START TEST fio_dif_1_default 00:24:52.880 ************************************ 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.880 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 bdev_null0 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:52.881 
05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:52.881 [2024-11-20 05:35:07.129172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.881 { 00:24:52.881 "params": { 00:24:52.881 "name": "Nvme$subsystem", 00:24:52.881 "trtype": "$TEST_TRANSPORT", 00:24:52.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.881 "adrfam": "ipv4", 00:24:52.881 "trsvcid": "$NVMF_PORT", 00:24:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.881 "hdgst": ${hdgst:-false}, 00:24:52.881 "ddgst": ${ddgst:-false} 00:24:52.881 }, 00:24:52.881 "method": "bdev_nvme_attach_controller" 00:24:52.881 } 00:24:52.881 EOF 00:24:52.881 )") 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:52.881 "params": { 00:24:52.881 "name": "Nvme0", 00:24:52.881 "trtype": "tcp", 00:24:52.881 "traddr": "10.0.0.3", 00:24:52.881 "adrfam": "ipv4", 00:24:52.881 "trsvcid": "4420", 00:24:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:52.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:52.881 "hdgst": false, 00:24:52.881 "ddgst": false 00:24:52.881 }, 00:24:52.881 "method": "bdev_nvme_attach_controller" 00:24:52.881 }' 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:52.881 05:35:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.881 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:52.881 fio-3.35 00:24:52.881 Starting 1 thread 00:25:05.081 00:25:05.081 filename0: (groupid=0, jobs=1): err= 0: pid=83501: Wed Nov 20 05:35:17 2024 00:25:05.081 read: IOPS=7968, BW=31.1MiB/s (32.6MB/s)(311MiB/10001msec) 00:25:05.081 slat (usec): min=7, max=272, avg= 9.53, stdev= 3.17 00:25:05.081 clat (usec): min=408, max=4359, avg=473.45, stdev=59.30 00:25:05.081 lat (usec): min=416, max=4396, avg=482.98, stdev=59.96 00:25:05.081 clat percentiles (usec): 00:25:05.081 | 1.00th=[ 416], 5.00th=[ 424], 
10.00th=[ 429], 20.00th=[ 437], 00:25:05.081 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 465], 00:25:05.081 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 562], 95.00th=[ 586], 00:25:05.082 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[ 676], 00:25:05.082 | 99.99th=[ 2540] 00:25:05.082 bw ( KiB/s): min=28384, max=33696, per=100.00%, avg=31905.68, stdev=1401.58, samples=19 00:25:05.082 iops : min= 7096, max= 8424, avg=7976.42, stdev=350.40, samples=19 00:25:05.082 lat (usec) : 500=79.36%, 750=20.62% 00:25:05.082 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:25:05.082 cpu : usr=84.58%, sys=13.34%, ctx=26, majf=0, minf=9 00:25:05.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.082 issued rwts: total=79688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:05.082 00:25:05.082 Run status group 0 (all jobs): 00:25:05.082 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=311MiB (326MB), run=10001-10001msec 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 00:25:05.082 real 0m10.952s 00:25:05.082 user 0m9.062s 00:25:05.082 sys 0m1.578s 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 ************************************ 00:25:05.082 END TEST fio_dif_1_default 00:25:05.082 ************************************ 00:25:05.082 05:35:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:05.082 05:35:18 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:05.082 05:35:18 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 ************************************ 00:25:05.082 START TEST fio_dif_1_multi_subsystems 00:25:05.082 ************************************ 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 bdev_null0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 [2024-11-20 05:35:18.120956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 bdev_null1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:05.082 { 00:25:05.082 "params": { 00:25:05.082 "name": "Nvme$subsystem", 00:25:05.082 "trtype": "$TEST_TRANSPORT", 00:25:05.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.082 "adrfam": "ipv4", 00:25:05.082 "trsvcid": "$NVMF_PORT", 00:25:05.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.082 "hdgst": ${hdgst:-false}, 00:25:05.082 "ddgst": ${ddgst:-false} 00:25:05.082 }, 00:25:05.082 "method": "bdev_nvme_attach_controller" 00:25:05.082 } 00:25:05.082 EOF 00:25:05.082 )") 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:05.082 05:35:18 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:05.082 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:05.082 { 00:25:05.082 "params": { 00:25:05.082 "name": "Nvme$subsystem", 00:25:05.082 "trtype": "$TEST_TRANSPORT", 00:25:05.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.083 "adrfam": "ipv4", 00:25:05.083 "trsvcid": "$NVMF_PORT", 00:25:05.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.083 "hdgst": ${hdgst:-false}, 00:25:05.083 "ddgst": ${ddgst:-false} 00:25:05.083 }, 00:25:05.083 "method": "bdev_nvme_attach_controller" 00:25:05.083 } 00:25:05.083 EOF 00:25:05.083 )") 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
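Note on the fio invocation being assembled here: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the printf just below shows the two entries for Nvme0 and Nvme1), the result is piped through jq and handed to fio as --spdk_json_conf on /dev/fd/62, while gen_fio_conf supplies the job file on /dev/fd/61, all with the SPDK bdev plugin LD_PRELOADed. A standalone sketch of the equivalent invocation is below; the attach parameters are copied from the trace, but the outer "subsystems"/"bdev"/"config" wrapper, thread=1, the job options beyond what the fio banner shows (randread, 4k, iodepth=4), and the Nvme0n1/Nvme1n1 filename values are assumptions, since the trace does not print them.

    # Sketch only: JSON wrapper, thread=1 and filename= bdev names are assumed, not copied from the trace.
    cat > /tmp/nvmf_fio.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                          "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                          "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                          "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
          ] }
      ]
    }
    EOF
    cat > /tmp/nvmf_fio.job <<'EOF'
    [global]
    thread=1
    rw=randread
    bs=4k
    iodepth=4
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf_fio.json /tmp/nvmf_fio.job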
00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:05.083 "params": { 00:25:05.083 "name": "Nvme0", 00:25:05.083 "trtype": "tcp", 00:25:05.083 "traddr": "10.0.0.3", 00:25:05.083 "adrfam": "ipv4", 00:25:05.083 "trsvcid": "4420", 00:25:05.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.083 "hdgst": false, 00:25:05.083 "ddgst": false 00:25:05.083 }, 00:25:05.083 "method": "bdev_nvme_attach_controller" 00:25:05.083 },{ 00:25:05.083 "params": { 00:25:05.083 "name": "Nvme1", 00:25:05.083 "trtype": "tcp", 00:25:05.083 "traddr": "10.0.0.3", 00:25:05.083 "adrfam": "ipv4", 00:25:05.083 "trsvcid": "4420", 00:25:05.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.083 "hdgst": false, 00:25:05.083 "ddgst": false 00:25:05.083 }, 00:25:05.083 "method": "bdev_nvme_attach_controller" 00:25:05.083 }' 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:05.083 05:35:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:05.083 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:05.083 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:05.083 fio-3.35 00:25:05.083 Starting 2 threads 00:25:15.155 00:25:15.155 filename0: (groupid=0, jobs=1): err= 0: pid=83661: Wed Nov 20 05:35:28 2024 00:25:15.155 read: IOPS=4625, BW=18.1MiB/s (18.9MB/s)(181MiB/10001msec) 00:25:15.155 slat (nsec): min=4458, max=57739, avg=13418.42, stdev=4093.38 00:25:15.155 clat (usec): min=424, max=5781, avg=826.91, stdev=82.18 00:25:15.155 lat (usec): min=432, max=5798, avg=840.33, stdev=82.28 00:25:15.155 clat percentiles (usec): 00:25:15.155 | 1.00th=[ 750], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 791], 00:25:15.155 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:25:15.155 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 996], 00:25:15.155 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1172], 00:25:15.155 | 99.99th=[ 1221] 00:25:15.155 bw ( KiB/s): min=14624, max=19040, per=50.09%, avg=18496.21, stdev=981.04, samples=19 00:25:15.155 iops : min= 3656, 
max= 4760, avg=4624.05, stdev=245.26, samples=19 00:25:15.155 lat (usec) : 500=0.35%, 750=0.50%, 1000=94.19% 00:25:15.155 lat (msec) : 2=4.95%, 10=0.01% 00:25:15.155 cpu : usr=87.70%, sys=10.77%, ctx=13, majf=0, minf=0 00:25:15.155 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.155 issued rwts: total=46256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.155 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:15.155 filename1: (groupid=0, jobs=1): err= 0: pid=83662: Wed Nov 20 05:35:28 2024 00:25:15.155 read: IOPS=4606, BW=18.0MiB/s (18.9MB/s)(180MiB/10001msec) 00:25:15.155 slat (usec): min=4, max=633, avg=14.30, stdev= 7.08 00:25:15.155 clat (usec): min=507, max=6534, avg=827.42, stdev=109.48 00:25:15.155 lat (usec): min=515, max=6564, avg=841.72, stdev=110.27 00:25:15.155 clat percentiles (usec): 00:25:15.155 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 742], 20.00th=[ 775], 00:25:15.155 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 832], 00:25:15.155 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 963], 00:25:15.155 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1254], 99.95th=[ 1450], 00:25:15.155 | 99.99th=[ 4424] 00:25:15.155 bw ( KiB/s): min=14592, max=18976, per=49.88%, avg=18418.53, stdev=981.70, samples=19 00:25:15.155 iops : min= 3648, max= 4744, avg=4604.63, stdev=245.42, samples=19 00:25:15.155 lat (usec) : 750=12.23%, 1000=83.98% 00:25:15.155 lat (msec) : 2=3.76%, 10=0.03% 00:25:15.155 cpu : usr=87.09%, sys=10.72%, ctx=99, majf=0, minf=0 00:25:15.155 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.155 issued rwts: total=46072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.155 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:15.155 00:25:15.155 Run status group 0 (all jobs): 00:25:15.155 READ: bw=36.1MiB/s (37.8MB/s), 18.0MiB/s-18.1MiB/s (18.9MB/s-18.9MB/s), io=361MiB (378MB), run=10001-10001msec 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 00:25:15.155 real 0m11.061s 00:25:15.155 user 0m18.199s 00:25:15.155 sys 0m2.389s 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:15.155 ************************************ 00:25:15.155 END TEST fio_dif_1_multi_subsystems 00:25:15.155 ************************************ 00:25:15.155 05:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:15.155 05:35:29 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:15.155 05:35:29 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 ************************************ 00:25:15.155 START TEST fio_dif_rand_params 00:25:15.155 ************************************ 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:15.155 05:35:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 bdev_null0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.155 [2024-11-20 05:35:29.223492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:15.155 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.156 { 00:25:15.156 "params": { 00:25:15.156 "name": "Nvme$subsystem", 00:25:15.156 "trtype": "$TEST_TRANSPORT", 00:25:15.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.156 "adrfam": "ipv4", 00:25:15.156 "trsvcid": "$NVMF_PORT", 00:25:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:15.156 "hdgst": ${hdgst:-false}, 00:25:15.156 "ddgst": ${ddgst:-false} 00:25:15.156 }, 00:25:15.156 "method": "bdev_nvme_attach_controller" 00:25:15.156 } 00:25:15.156 EOF 00:25:15.156 )") 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:15.156 "params": { 00:25:15.156 "name": "Nvme0", 00:25:15.156 "trtype": "tcp", 00:25:15.156 "traddr": "10.0.0.3", 00:25:15.156 "adrfam": "ipv4", 00:25:15.156 "trsvcid": "4420", 00:25:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.156 "hdgst": false, 00:25:15.156 "ddgst": false 00:25:15.156 }, 00:25:15.156 "method": "bdev_nvme_attach_controller" 00:25:15.156 }' 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:15.156 05:35:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.156 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:15.156 ... 
00:25:15.156 fio-3.35 00:25:15.156 Starting 3 threads 00:25:21.715 00:25:21.715 filename0: (groupid=0, jobs=1): err= 0: pid=83809: Wed Nov 20 05:35:34 2024 00:25:21.715 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(158MiB/5004msec) 00:25:21.715 slat (nsec): min=8064, max=55845, avg=18139.76, stdev=6214.47 00:25:21.715 clat (usec): min=11628, max=14645, avg=11842.30, stdev=348.30 00:25:21.715 lat (usec): min=11638, max=14662, avg=11860.44, stdev=348.75 00:25:21.715 clat percentiles (usec): 00:25:21.715 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:25:21.715 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:25:21.715 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:25:21.715 | 99.00th=[14091], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:25:21.715 | 99.99th=[14615] 00:25:21.715 bw ( KiB/s): min=32256, max=33024, per=33.39%, avg=32341.33, stdev=256.00, samples=9 00:25:21.715 iops : min= 252, max= 258, avg=252.67, stdev= 2.00, samples=9 00:25:21.715 lat (msec) : 20=100.00% 00:25:21.715 cpu : usr=90.07%, sys=9.29%, ctx=9, majf=0, minf=0 00:25:21.715 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.715 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:21.715 filename0: (groupid=0, jobs=1): err= 0: pid=83810: Wed Nov 20 05:35:34 2024 00:25:21.715 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(158MiB/5006msec) 00:25:21.715 slat (usec): min=4, max=188, avg=17.95, stdev= 7.41 00:25:21.715 clat (usec): min=11404, max=14692, avg=11848.56, stdev=376.13 00:25:21.715 lat (usec): min=11419, max=14708, avg=11866.51, stdev=376.14 00:25:21.715 clat percentiles (usec): 00:25:21.715 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:25:21.715 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:25:21.715 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:25:21.715 | 99.00th=[14222], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:25:21.715 | 99.99th=[14746] 00:25:21.715 bw ( KiB/s): min=31488, max=33024, per=33.30%, avg=32256.00, stdev=362.04, samples=10 00:25:21.715 iops : min= 246, max= 258, avg=252.00, stdev= 2.83, samples=10 00:25:21.715 lat (msec) : 20=100.00% 00:25:21.715 cpu : usr=89.85%, sys=9.21%, ctx=47, majf=0, minf=0 00:25:21.715 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.715 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:21.715 filename0: (groupid=0, jobs=1): err= 0: pid=83811: Wed Nov 20 05:35:34 2024 00:25:21.715 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(158MiB/5007msec) 00:25:21.715 slat (nsec): min=4766, max=59845, avg=17501.52, stdev=6514.66 00:25:21.715 clat (usec): min=11579, max=15828, avg=11852.40, stdev=400.46 00:25:21.715 lat (usec): min=11592, max=15857, avg=11869.90, stdev=400.85 00:25:21.715 clat percentiles (usec): 00:25:21.716 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:25:21.716 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 
60.00th=[11731], 00:25:21.716 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:25:21.716 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15795], 99.95th=[15795], 00:25:21.716 | 99.99th=[15795] 00:25:21.716 bw ( KiB/s): min=31488, max=33024, per=33.30%, avg=32256.00, stdev=362.04, samples=10 00:25:21.716 iops : min= 246, max= 258, avg=252.00, stdev= 2.83, samples=10 00:25:21.716 lat (msec) : 20=100.00% 00:25:21.716 cpu : usr=89.39%, sys=9.91%, ctx=14, majf=0, minf=0 00:25:21.716 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.716 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:21.716 00:25:21.716 Run status group 0 (all jobs): 00:25:21.716 READ: bw=94.6MiB/s (99.2MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=474MiB (497MB), run=5004-5007msec 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:21.716 05:35:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 bdev_null0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 [2024-11-20 05:35:35.148309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 bdev_null1 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 bdev_null2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:21.716 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:21.716 { 00:25:21.716 "params": { 00:25:21.716 "name": "Nvme$subsystem", 00:25:21.716 "trtype": "$TEST_TRANSPORT", 00:25:21.717 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "$NVMF_PORT", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.717 "hdgst": ${hdgst:-false}, 00:25:21.717 "ddgst": ${ddgst:-false} 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 } 00:25:21.717 EOF 00:25:21.717 )") 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:21.717 { 00:25:21.717 "params": { 00:25:21.717 "name": "Nvme$subsystem", 00:25:21.717 "trtype": "$TEST_TRANSPORT", 00:25:21.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "$NVMF_PORT", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.717 "hdgst": ${hdgst:-false}, 00:25:21.717 "ddgst": ${ddgst:-false} 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 } 00:25:21.717 EOF 00:25:21.717 )") 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:21.717 05:35:35 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:21.717 { 00:25:21.717 "params": { 00:25:21.717 "name": "Nvme$subsystem", 00:25:21.717 "trtype": "$TEST_TRANSPORT", 00:25:21.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "$NVMF_PORT", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.717 "hdgst": ${hdgst:-false}, 00:25:21.717 "ddgst": ${ddgst:-false} 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 } 00:25:21.717 EOF 00:25:21.717 )") 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:21.717 "params": { 00:25:21.717 "name": "Nvme0", 00:25:21.717 "trtype": "tcp", 00:25:21.717 "traddr": "10.0.0.3", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "4420", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:21.717 "hdgst": false, 00:25:21.717 "ddgst": false 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 },{ 00:25:21.717 "params": { 00:25:21.717 "name": "Nvme1", 00:25:21.717 "trtype": "tcp", 00:25:21.717 "traddr": "10.0.0.3", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "4420", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.717 "hdgst": false, 00:25:21.717 "ddgst": false 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 },{ 00:25:21.717 "params": { 00:25:21.717 "name": "Nvme2", 00:25:21.717 "trtype": "tcp", 00:25:21.717 "traddr": "10.0.0.3", 00:25:21.717 "adrfam": "ipv4", 00:25:21.717 "trsvcid": "4420", 00:25:21.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:21.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:21.717 "hdgst": false, 00:25:21.717 "ddgst": false 00:25:21.717 }, 00:25:21.717 "method": "bdev_nvme_attach_controller" 00:25:21.717 }' 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:21.717 05:35:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.717 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:21.717 ... 00:25:21.717 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:21.717 ... 00:25:21.717 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:21.717 ... 00:25:21.717 fio-3.35 00:25:21.717 Starting 24 threads 00:25:33.949 00:25:33.949 filename0: (groupid=0, jobs=1): err= 0: pid=83912: Wed Nov 20 05:35:46 2024 00:25:33.949 read: IOPS=199, BW=798KiB/s (817kB/s)(7996KiB/10017msec) 00:25:33.949 slat (usec): min=8, max=8039, avg=28.81, stdev=253.89 00:25:33.949 clat (msec): min=28, max=152, avg=80.03, stdev=21.97 00:25:33.949 lat (msec): min=28, max=152, avg=80.06, stdev=21.97 00:25:33.949 clat percentiles (msec): 00:25:33.949 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:25:33.949 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:25:33.949 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 121], 00:25:33.949 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 134], 99.95th=[ 153], 00:25:33.949 | 99.99th=[ 153] 00:25:33.949 bw ( KiB/s): min= 616, max= 948, per=4.24%, avg=795.45, stdev=106.73, samples=20 00:25:33.949 iops : min= 154, max= 237, avg=198.85, stdev=26.69, samples=20 00:25:33.949 lat (msec) : 50=9.10%, 100=71.79%, 250=19.11% 00:25:33.949 cpu : usr=39.82%, sys=2.63%, ctx=1206, majf=0, minf=9 00:25:33.949 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:33.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.949 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83913: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=193, BW=775KiB/s (794kB/s)(7784KiB/10043msec) 00:25:33.950 slat (usec): min=3, max=8045, avg=23.61, stdev=216.34 00:25:33.950 clat (msec): min=34, max=156, avg=82.39, stdev=23.13 00:25:33.950 lat (msec): min=34, max=156, avg=82.41, stdev=23.13 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 58], 00:25:33.950 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 85], 00:25:33.950 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 122], 00:25:33.950 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 157], 99.95th=[ 157], 00:25:33.950 | 99.99th=[ 157] 00:25:33.950 bw ( KiB/s): min= 528, max= 920, per=4.12%, avg=772.00, stdev=116.79, samples=20 00:25:33.950 iops : min= 132, max= 230, avg=193.00, stdev=29.20, samples=20 00:25:33.950 lat (msec) : 50=8.48%, 100=67.32%, 250=24.20% 00:25:33.950 cpu : usr=37.65%, sys=2.65%, ctx=1263, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83914: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=191, BW=767KiB/s (785kB/s)(7672KiB/10004msec) 00:25:33.950 slat (usec): min=3, max=8048, avg=34.53, stdev=381.37 00:25:33.950 clat (msec): min=4, max=170, avg=83.28, stdev=26.50 00:25:33.950 lat (msec): min=4, max=170, avg=83.31, stdev=26.50 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 57], 00:25:33.950 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 84], 00:25:33.950 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 123], 95.00th=[ 131], 00:25:33.950 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 171], 99.95th=[ 171], 00:25:33.950 | 99.99th=[ 171] 00:25:33.950 bw ( KiB/s): min= 512, max= 968, per=4.00%, avg=749.89, stdev=148.40, samples=19 00:25:33.950 iops : min= 128, max= 242, avg=187.47, stdev=37.10, samples=19 00:25:33.950 lat (msec) : 10=0.16%, 20=0.31%, 50=9.18%, 100=63.66%, 250=26.69% 00:25:33.950 cpu : usr=31.67%, sys=1.95%, ctx=1258, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=76.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=1918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83915: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=200, BW=800KiB/s (819kB/s)(8024KiB/10030msec) 00:25:33.950 slat (usec): min=3, max=8056, avg=28.20, stdev=310.20 00:25:33.950 clat (msec): min=33, max=143, avg=79.81, stdev=22.14 00:25:33.950 lat (msec): min=33, max=143, avg=79.84, stdev=22.14 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:25:33.950 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:25:33.950 | 70.00th=[ 87], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 121], 00:25:33.950 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:25:33.950 | 99.99th=[ 144] 00:25:33.950 bw ( KiB/s): min= 568, max= 968, per=4.26%, avg=798.80, stdev=106.43, samples=20 00:25:33.950 iops : min= 142, max= 242, avg=199.70, stdev=26.61, samples=20 00:25:33.950 lat (msec) : 50=10.82%, 100=70.44%, 250=18.74% 00:25:33.950 cpu : usr=32.73%, sys=2.22%, ctx=923, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83916: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=200, BW=804KiB/s (823kB/s)(8084KiB/10055msec) 00:25:33.950 slat (usec): min=5, max=8070, avg=27.27, stdev=240.40 00:25:33.950 clat (msec): min=13, max=153, avg=79.39, stdev=23.64 00:25:33.950 lat (msec): min=13, max=153, avg=79.42, stdev=23.64 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 20], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:25:33.950 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 83], 00:25:33.950 | 70.00th=[ 88], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 122], 
00:25:33.950 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 138], 00:25:33.950 | 99.99th=[ 155] 00:25:33.950 bw ( KiB/s): min= 584, max= 1136, per=4.28%, avg=801.75, stdev=133.90, samples=20 00:25:33.950 iops : min= 146, max= 284, avg=200.40, stdev=33.49, samples=20 00:25:33.950 lat (msec) : 20=1.48%, 50=9.85%, 100=67.74%, 250=20.93% 00:25:33.950 cpu : usr=41.59%, sys=3.00%, ctx=1362, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83917: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=201, BW=807KiB/s (826kB/s)(8080KiB/10014msec) 00:25:33.950 slat (usec): min=5, max=8045, avg=29.85, stdev=321.68 00:25:33.950 clat (msec): min=30, max=143, avg=79.14, stdev=22.44 00:25:33.950 lat (msec): min=30, max=143, avg=79.17, stdev=22.44 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:25:33.950 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:25:33.950 | 70.00th=[ 86], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 121], 00:25:33.950 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 144], 00:25:33.950 | 99.99th=[ 144] 00:25:33.950 bw ( KiB/s): min= 617, max= 968, per=4.29%, avg=804.00, stdev=120.31, samples=20 00:25:33.950 iops : min= 154, max= 242, avg=200.95, stdev=30.14, samples=20 00:25:33.950 lat (msec) : 50=12.18%, 100=67.72%, 250=20.10% 00:25:33.950 cpu : usr=34.15%, sys=2.52%, ctx=962, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83918: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=199, BW=797KiB/s (816kB/s)(7988KiB/10019msec) 00:25:33.950 slat (usec): min=3, max=8063, avg=34.00, stdev=370.42 00:25:33.950 clat (msec): min=26, max=146, avg=80.11, stdev=23.42 00:25:33.950 lat (msec): min=26, max=146, avg=80.14, stdev=23.43 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:25:33.950 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:25:33.950 | 70.00th=[ 88], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 121], 00:25:33.950 | 99.00th=[ 133], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:25:33.950 | 99.99th=[ 148] 00:25:33.950 bw ( KiB/s): min= 568, max= 968, per=4.24%, avg=794.60, stdev=126.17, samples=20 00:25:33.950 iops : min= 142, max= 242, avg=198.65, stdev=31.54, samples=20 00:25:33.950 lat (msec) : 50=12.92%, 100=66.75%, 250=20.33% 00:25:33.950 cpu : usr=31.01%, sys=1.90%, ctx=837, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued 
rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename0: (groupid=0, jobs=1): err= 0: pid=83919: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=193, BW=773KiB/s (792kB/s)(7784KiB/10066msec) 00:25:33.950 slat (usec): min=4, max=8028, avg=21.41, stdev=202.23 00:25:33.950 clat (msec): min=6, max=155, avg=82.50, stdev=26.05 00:25:33.950 lat (msec): min=6, max=155, avg=82.52, stdev=26.05 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 58], 00:25:33.950 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 86], 00:25:33.950 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 125], 00:25:33.950 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 157], 00:25:33.950 | 99.99th=[ 157] 00:25:33.950 bw ( KiB/s): min= 540, max= 1142, per=4.13%, avg=773.15, stdev=162.36, samples=20 00:25:33.950 iops : min= 135, max= 285, avg=193.20, stdev=40.55, samples=20 00:25:33.950 lat (msec) : 10=0.72%, 20=1.64%, 50=6.89%, 100=62.85%, 250=27.90% 00:25:33.950 cpu : usr=39.10%, sys=2.61%, ctx=1233, majf=0, minf=9 00:25:33.950 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:25:33.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.950 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.950 filename1: (groupid=0, jobs=1): err= 0: pid=83920: Wed Nov 20 05:35:46 2024 00:25:33.950 read: IOPS=190, BW=763KiB/s (781kB/s)(7692KiB/10081msec) 00:25:33.950 slat (nsec): min=5557, max=64072, avg=13948.31, stdev=5967.16 00:25:33.950 clat (msec): min=2, max=171, avg=83.61, stdev=33.09 00:25:33.950 lat (msec): min=2, max=171, avg=83.62, stdev=33.09 00:25:33.950 clat percentiles (msec): 00:25:33.950 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 48], 20.00th=[ 61], 00:25:33.950 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 91], 00:25:33.950 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 122], 95.00th=[ 130], 00:25:33.950 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 00:25:33.951 | 99.99th=[ 171] 00:25:33.951 bw ( KiB/s): min= 496, max= 1908, per=4.07%, avg=762.20, stdev=300.04, samples=20 00:25:33.951 iops : min= 124, max= 477, avg=190.55, stdev=75.01, samples=20 00:25:33.951 lat (msec) : 4=4.16%, 10=2.29%, 20=1.04%, 50=4.11%, 100=55.23% 00:25:33.951 lat (msec) : 250=33.18% 00:25:33.951 cpu : usr=32.16%, sys=1.90%, ctx=1264, majf=0, minf=0 00:25:33.951 IO depths : 1=0.2%, 2=2.8%, 4=10.7%, 8=71.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=90.6%, 8=7.1%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83921: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=184, BW=738KiB/s (756kB/s)(7408KiB/10039msec) 00:25:33.951 slat (usec): min=5, max=8053, avg=28.26, stdev=323.15 00:25:33.951 clat (msec): min=31, max=168, avg=86.46, stdev=22.79 00:25:33.951 lat (msec): min=31, max=168, avg=86.49, stdev=22.79 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 72], 00:25:33.951 | 
30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 86], 00:25:33.951 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:25:33.951 | 99.00th=[ 146], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:25:33.951 | 99.99th=[ 169] 00:25:33.951 bw ( KiB/s): min= 512, max= 920, per=3.93%, avg=737.00, stdev=128.19, samples=20 00:25:33.951 iops : min= 128, max= 230, avg=184.20, stdev=32.05, samples=20 00:25:33.951 lat (msec) : 50=5.45%, 100=67.87%, 250=26.67% 00:25:33.951 cpu : usr=31.15%, sys=1.85%, ctx=852, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83922: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=199, BW=799KiB/s (818kB/s)(8016KiB/10029msec) 00:25:33.951 slat (usec): min=3, max=8054, avg=24.10, stdev=219.94 00:25:33.951 clat (msec): min=28, max=144, avg=79.89, stdev=23.18 00:25:33.951 lat (msec): min=28, max=144, avg=79.91, stdev=23.18 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 57], 00:25:33.951 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 82], 00:25:33.951 | 70.00th=[ 88], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 121], 00:25:33.951 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:25:33.951 | 99.99th=[ 144] 00:25:33.951 bw ( KiB/s): min= 528, max= 1001, per=4.25%, avg=796.45, stdev=129.51, samples=20 00:25:33.951 iops : min= 132, max= 250, avg=199.10, stdev=32.36, samples=20 00:25:33.951 lat (msec) : 50=9.13%, 100=67.96%, 250=22.90% 00:25:33.951 cpu : usr=38.57%, sys=2.67%, ctx=1197, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83923: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=200, BW=802KiB/s (821kB/s)(8036KiB/10017msec) 00:25:33.951 slat (usec): min=8, max=8041, avg=42.49, stdev=428.47 00:25:33.951 clat (msec): min=30, max=155, avg=79.53, stdev=22.97 00:25:33.951 lat (msec): min=30, max=155, avg=79.57, stdev=22.97 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 58], 00:25:33.951 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 83], 00:25:33.951 | 70.00th=[ 86], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 121], 00:25:33.951 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 153], 00:25:33.951 | 99.99th=[ 157] 00:25:33.951 bw ( KiB/s): min= 608, max= 976, per=4.26%, avg=799.45, stdev=118.13, samples=20 00:25:33.951 iops : min= 152, max= 244, avg=199.85, stdev=29.55, samples=20 00:25:33.951 lat (msec) : 50=10.55%, 100=67.70%, 250=21.75% 00:25:33.951 cpu : usr=38.17%, sys=2.41%, ctx=1181, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83924: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=188, BW=753KiB/s (771kB/s)(7532KiB/10004msec) 00:25:33.951 slat (usec): min=5, max=8025, avg=25.52, stdev=253.03 00:25:33.951 clat (msec): min=15, max=166, avg=84.88, stdev=25.21 00:25:33.951 lat (msec): min=15, max=166, avg=84.91, stdev=25.21 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 59], 00:25:33.951 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90], 00:25:33.951 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 128], 00:25:33.951 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:25:33.951 | 99.99th=[ 167] 00:25:33.951 bw ( KiB/s): min= 512, max= 928, per=3.93%, avg=736.21, stdev=150.97, samples=19 00:25:33.951 iops : min= 128, max= 232, avg=184.05, stdev=37.74, samples=19 00:25:33.951 lat (msec) : 20=0.32%, 50=7.01%, 100=61.66%, 250=31.01% 00:25:33.951 cpu : usr=44.51%, sys=2.97%, ctx=1362, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=89.4%, 8=8.5%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83925: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=199, BW=796KiB/s (815kB/s)(7968KiB/10009msec) 00:25:33.951 slat (usec): min=5, max=8048, avg=35.88, stdev=381.23 00:25:33.951 clat (msec): min=30, max=140, avg=80.17, stdev=23.45 00:25:33.951 lat (msec): min=30, max=140, avg=80.21, stdev=23.46 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 59], 00:25:33.951 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 84], 00:25:33.951 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 121], 00:25:33.951 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:25:33.951 | 99.99th=[ 140] 00:25:33.951 bw ( KiB/s): min= 528, max= 920, per=4.18%, avg=784.11, stdev=130.93, samples=19 00:25:33.951 iops : min= 132, max= 230, avg=196.00, stdev=32.75, samples=19 00:25:33.951 lat (msec) : 50=12.70%, 100=64.76%, 250=22.54% 00:25:33.951 cpu : usr=35.13%, sys=2.65%, ctx=969, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=81.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83926: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=198, BW=795KiB/s (814kB/s)(7988KiB/10051msec) 00:25:33.951 slat (usec): min=3, max=4040, avg=25.69, stdev=201.07 00:25:33.951 clat (msec): min=7, max=154, avg=80.27, stdev=24.75 00:25:33.951 lat (msec): min=7, max=154, avg=80.30, stdev=24.76 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 18], 
5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 58], 00:25:33.951 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:25:33.951 | 70.00th=[ 91], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 121], 00:25:33.951 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 155], 00:25:33.951 | 99.99th=[ 155] 00:25:33.951 bw ( KiB/s): min= 564, max= 1142, per=4.23%, avg=793.55, stdev=147.95, samples=20 00:25:33.951 iops : min= 141, max= 285, avg=198.30, stdev=36.95, samples=20 00:25:33.951 lat (msec) : 10=0.70%, 20=1.60%, 50=7.86%, 100=65.30%, 250=24.54% 00:25:33.951 cpu : usr=41.75%, sys=2.55%, ctx=1261, majf=0, minf=0 00:25:33.951 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename1: (groupid=0, jobs=1): err= 0: pid=83927: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=180, BW=723KiB/s (740kB/s)(7240KiB/10019msec) 00:25:33.951 slat (usec): min=7, max=8031, avg=25.86, stdev=241.58 00:25:33.951 clat (msec): min=28, max=154, avg=88.39, stdev=24.12 00:25:33.951 lat (msec): min=28, max=154, avg=88.42, stdev=24.12 00:25:33.951 clat percentiles (msec): 00:25:33.951 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 70], 00:25:33.951 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 93], 00:25:33.951 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 125], 00:25:33.951 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:25:33.951 | 99.99th=[ 155] 00:25:33.951 bw ( KiB/s): min= 512, max= 988, per=3.84%, avg=719.80, stdev=144.82, samples=20 00:25:33.951 iops : min= 128, max= 247, avg=179.95, stdev=36.20, samples=20 00:25:33.951 lat (msec) : 50=5.52%, 100=59.45%, 250=35.03% 00:25:33.951 cpu : usr=42.02%, sys=2.92%, ctx=1389, majf=0, minf=9 00:25:33.951 IO depths : 1=0.1%, 2=3.4%, 4=13.4%, 8=69.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:25:33.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 complete : 0=0.0%, 4=90.8%, 8=6.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.951 issued rwts: total=1810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.951 filename2: (groupid=0, jobs=1): err= 0: pid=83928: Wed Nov 20 05:35:46 2024 00:25:33.951 read: IOPS=198, BW=793KiB/s (812kB/s)(7936KiB/10005msec) 00:25:33.952 slat (usec): min=3, max=8059, avg=19.02, stdev=180.68 00:25:33.952 clat (msec): min=3, max=176, avg=80.55, stdev=26.51 00:25:33.952 lat (msec): min=3, max=176, avg=80.57, stdev=26.51 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 5], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:25:33.952 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:25:33.952 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 178], 00:25:33.952 | 99.99th=[ 178] 00:25:33.952 bw ( KiB/s): min= 512, max= 976, per=4.08%, avg=764.63, stdev=135.04, samples=19 00:25:33.952 iops : min= 128, max= 244, avg=191.16, stdev=33.76, samples=19 00:25:33.952 lat (msec) : 4=0.20%, 10=1.31%, 20=0.35%, 50=12.60%, 100=61.84% 00:25:33.952 lat (msec) : 250=23.69% 00:25:33.952 cpu : usr=31.03%, sys=1.95%, ctx=839, majf=0, minf=9 00:25:33.952 IO 
depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83929: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=200, BW=803KiB/s (822kB/s)(8068KiB/10049msec) 00:25:33.952 slat (usec): min=5, max=7984, avg=26.39, stdev=264.14 00:25:33.952 clat (msec): min=26, max=148, avg=79.51, stdev=22.41 00:25:33.952 lat (msec): min=26, max=148, avg=79.53, stdev=22.41 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 58], 00:25:33.952 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 82], 00:25:33.952 | 70.00th=[ 87], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 140], 00:25:33.952 | 99.99th=[ 148] 00:25:33.952 bw ( KiB/s): min= 608, max= 952, per=4.27%, avg=800.40, stdev=111.95, samples=20 00:25:33.952 iops : min= 152, max= 238, avg=200.10, stdev=27.99, samples=20 00:25:33.952 lat (msec) : 50=9.07%, 100=69.96%, 250=20.97% 00:25:33.952 cpu : usr=31.36%, sys=2.15%, ctx=1238, majf=0, minf=9 00:25:33.952 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83930: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=206, BW=825KiB/s (845kB/s)(8304KiB/10060msec) 00:25:33.952 slat (usec): min=4, max=8037, avg=26.56, stdev=278.15 00:25:33.952 clat (msec): min=2, max=158, avg=77.26, stdev=28.06 00:25:33.952 lat (msec): min=2, max=158, avg=77.29, stdev=28.06 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 48], 20.00th=[ 57], 00:25:33.952 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:25:33.952 | 70.00th=[ 89], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:25:33.952 | 99.99th=[ 159] 00:25:33.952 bw ( KiB/s): min= 560, max= 1768, per=4.39%, avg=823.60, stdev=249.84, samples=20 00:25:33.952 iops : min= 140, max= 442, avg=205.90, stdev=62.46, samples=20 00:25:33.952 lat (msec) : 4=2.41%, 10=2.79%, 20=0.87%, 50=7.95%, 100=63.73% 00:25:33.952 lat (msec) : 250=22.25% 00:25:33.952 cpu : usr=39.16%, sys=2.58%, ctx=1236, majf=0, minf=0 00:25:33.952 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=80.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83931: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=204, BW=817KiB/s (836kB/s)(8172KiB/10007msec) 00:25:33.952 slat (usec): min=6, max=4039, avg=17.90, stdev=89.18 00:25:33.952 clat (msec): 
min=15, max=144, avg=78.27, stdev=23.28 00:25:33.952 lat (msec): min=15, max=144, avg=78.29, stdev=23.28 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 56], 00:25:33.952 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:25:33.952 | 70.00th=[ 86], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:25:33.952 | 99.99th=[ 144] 00:25:33.952 bw ( KiB/s): min= 616, max= 928, per=4.28%, avg=801.58, stdev=114.90, samples=19 00:25:33.952 iops : min= 154, max= 232, avg=200.37, stdev=28.74, samples=19 00:25:33.952 lat (msec) : 20=0.29%, 50=11.36%, 100=68.43%, 250=19.92% 00:25:33.952 cpu : usr=38.53%, sys=2.31%, ctx=1206, majf=0, minf=9 00:25:33.952 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83932: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=188, BW=755KiB/s (773kB/s)(7580KiB/10044msec) 00:25:33.952 slat (usec): min=7, max=8038, avg=24.91, stdev=260.60 00:25:33.952 clat (msec): min=38, max=156, avg=84.56, stdev=24.38 00:25:33.952 lat (msec): min=38, max=156, avg=84.59, stdev=24.38 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:25:33.952 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 85], 00:25:33.952 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 131], 00:25:33.952 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:25:33.952 | 99.99th=[ 157] 00:25:33.952 bw ( KiB/s): min= 400, max= 896, per=4.01%, avg=751.60, stdev=138.31, samples=20 00:25:33.952 iops : min= 100, max= 224, avg=187.90, stdev=34.58, samples=20 00:25:33.952 lat (msec) : 50=7.60%, 100=65.59%, 250=26.81% 00:25:33.952 cpu : usr=34.29%, sys=2.12%, ctx=959, majf=0, minf=9 00:25:33.952 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=89.2%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83933: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=199, BW=797KiB/s (816kB/s)(7968KiB/10003msec) 00:25:33.952 slat (usec): min=5, max=8052, avg=35.72, stdev=381.09 00:25:33.952 clat (msec): min=2, max=147, avg=80.15, stdev=25.59 00:25:33.952 lat (msec): min=2, max=147, avg=80.19, stdev=25.59 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 4], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:25:33.952 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:25:33.952 | 70.00th=[ 93], 80.00th=[ 107], 90.00th=[ 116], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 148], 99.95th=[ 148], 00:25:33.952 | 99.99th=[ 148] 00:25:33.952 bw ( KiB/s): min= 528, max= 920, per=4.09%, avg=766.32, stdev=123.06, samples=19 00:25:33.952 iops : min= 132, max= 230, avg=191.58, stdev=30.76, samples=19 00:25:33.952 lat (msec) : 4=1.00%, 
10=0.90%, 20=0.30%, 50=11.50%, 100=62.10% 00:25:33.952 lat (msec) : 250=24.20% 00:25:33.952 cpu : usr=34.57%, sys=2.52%, ctx=1013, majf=0, minf=9 00:25:33.952 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=79.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83934: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=183, BW=733KiB/s (750kB/s)(7348KiB/10028msec) 00:25:33.952 slat (usec): min=5, max=8047, avg=27.96, stdev=324.18 00:25:33.952 clat (msec): min=35, max=179, avg=87.06, stdev=25.43 00:25:33.952 lat (msec): min=35, max=179, avg=87.09, stdev=25.42 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:25:33.952 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 93], 00:25:33.952 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:25:33.952 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:25:33.952 | 99.99th=[ 180] 00:25:33.952 bw ( KiB/s): min= 512, max= 952, per=3.90%, avg=731.25, stdev=152.09, samples=20 00:25:33.952 iops : min= 128, max= 238, avg=182.80, stdev=38.02, samples=20 00:25:33.952 lat (msec) : 50=8.38%, 100=60.42%, 250=31.19% 00:25:33.952 cpu : usr=33.21%, sys=2.20%, ctx=950, majf=0, minf=9 00:25:33.952 IO depths : 1=0.1%, 2=2.8%, 4=11.3%, 8=71.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:25:33.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.952 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.952 filename2: (groupid=0, jobs=1): err= 0: pid=83935: Wed Nov 20 05:35:46 2024 00:25:33.952 read: IOPS=203, BW=816KiB/s (835kB/s)(8204KiB/10055msec) 00:25:33.952 slat (usec): min=7, max=8024, avg=25.65, stdev=234.17 00:25:33.952 clat (msec): min=14, max=137, avg=78.22, stdev=23.39 00:25:33.952 lat (msec): min=14, max=137, avg=78.25, stdev=23.39 00:25:33.952 clat percentiles (msec): 00:25:33.952 | 1.00th=[ 20], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 56], 00:25:33.952 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 81], 00:25:33.952 | 70.00th=[ 87], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 121], 00:25:33.952 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 138], 00:25:33.952 | 99.99th=[ 138] 00:25:33.953 bw ( KiB/s): min= 640, max= 1120, per=4.34%, avg=813.75, stdev=124.18, samples=20 00:25:33.953 iops : min= 160, max= 280, avg=203.40, stdev=31.06, samples=20 00:25:33.953 lat (msec) : 20=1.46%, 50=10.82%, 100=67.43%, 250=20.28% 00:25:33.953 cpu : usr=44.69%, sys=2.81%, ctx=1313, majf=0, minf=9 00:25:33.953 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:33.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.953 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.953 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:33.953 00:25:33.953 Run status group 0 (all jobs): 00:25:33.953 READ: bw=18.3MiB/s (19.2MB/s), 723KiB/s-825KiB/s 
(740kB/s-845kB/s), io=184MiB (193MB), run=10003-10081msec 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 bdev_null0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 [2024-11-20 05:35:46.530799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 bdev_null1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:33.953 { 00:25:33.953 "params": { 00:25:33.953 "name": "Nvme$subsystem", 00:25:33.953 "trtype": "$TEST_TRANSPORT", 00:25:33.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.953 "adrfam": "ipv4", 00:25:33.953 "trsvcid": "$NVMF_PORT", 00:25:33.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.953 "hdgst": ${hdgst:-false}, 00:25:33.953 "ddgst": ${ddgst:-false} 00:25:33.953 }, 00:25:33.953 "method": "bdev_nvme_attach_controller" 00:25:33.953 } 00:25:33.953 EOF 00:25:33.953 )") 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.953 05:35:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.953 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:33.954 { 00:25:33.954 "params": { 00:25:33.954 "name": "Nvme$subsystem", 00:25:33.954 "trtype": "$TEST_TRANSPORT", 00:25:33.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.954 "adrfam": "ipv4", 00:25:33.954 "trsvcid": "$NVMF_PORT", 00:25:33.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.954 "hdgst": ${hdgst:-false}, 00:25:33.954 "ddgst": ${ddgst:-false} 00:25:33.954 }, 00:25:33.954 "method": "bdev_nvme_attach_controller" 00:25:33.954 } 00:25:33.954 EOF 00:25:33.954 )") 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
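Note: the create_subsystems trace above reduces to four RPCs per subsystem: a null bdev with 16-byte metadata and DIF type 1, an NVMe-oF subsystem, a namespace, and a TCP listener. A condensed sketch, with arguments taken from the trace (rpc_cmd is the test framework's RPC helper):

  for sub in 0 1; do
      rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.3 -s 4420
  done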
00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:33.954 "params": { 00:25:33.954 "name": "Nvme0", 00:25:33.954 "trtype": "tcp", 00:25:33.954 "traddr": "10.0.0.3", 00:25:33.954 "adrfam": "ipv4", 00:25:33.954 "trsvcid": "4420", 00:25:33.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.954 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:33.954 "hdgst": false, 00:25:33.954 "ddgst": false 00:25:33.954 }, 00:25:33.954 "method": "bdev_nvme_attach_controller" 00:25:33.954 },{ 00:25:33.954 "params": { 00:25:33.954 "name": "Nvme1", 00:25:33.954 "trtype": "tcp", 00:25:33.954 "traddr": "10.0.0.3", 00:25:33.954 "adrfam": "ipv4", 00:25:33.954 "trsvcid": "4420", 00:25:33.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.954 "hdgst": false, 00:25:33.954 "ddgst": false 00:25:33.954 }, 00:25:33.954 "method": "bdev_nvme_attach_controller" 00:25:33.954 }' 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:33.954 05:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.954 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:33.954 ... 00:25:33.954 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:33.954 ... 
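Note: before fio starts, the fio_bdev helper traced above checks whether the spdk_bdev plugin links against a sanitizer runtime, preloads it together with the plugin, and hands fio the target JSON plus the generated job file over /dev/fd. A minimal sketch of that pattern, with illustrative paths (the real logic lives in autotest_common.sh and target/dif.sh; libclang_rt.asan is probed the same way as libasan):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty unless built with ASAN
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)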
00:25:33.954 fio-3.35 00:25:33.954 Starting 4 threads 00:25:38.138 00:25:38.138 filename0: (groupid=0, jobs=1): err= 0: pid=84072: Wed Nov 20 05:35:52 2024 00:25:38.138 read: IOPS=1890, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5001msec) 00:25:38.138 slat (nsec): min=3840, max=55116, avg=14809.71, stdev=4811.48 00:25:38.138 clat (usec): min=764, max=7963, avg=4187.55, stdev=1103.85 00:25:38.138 lat (usec): min=775, max=7977, avg=4202.36, stdev=1103.64 00:25:38.138 clat percentiles (usec): 00:25:38.138 | 1.00th=[ 1434], 5.00th=[ 2089], 10.00th=[ 3326], 20.00th=[ 3425], 00:25:38.138 | 30.00th=[ 3458], 40.00th=[ 3785], 50.00th=[ 4047], 60.00th=[ 4359], 00:25:38.138 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5407], 95.00th=[ 6128], 00:25:38.138 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7635], 00:25:38.138 | 99.99th=[ 7963] 00:25:38.138 bw ( KiB/s): min=11632, max=18560, per=23.81%, avg=15078.22, stdev=2227.07, samples=9 00:25:38.138 iops : min= 1454, max= 2320, avg=1884.78, stdev=278.38, samples=9 00:25:38.138 lat (usec) : 1000=0.23% 00:25:38.138 lat (msec) : 2=4.55%, 4=42.63%, 10=52.59% 00:25:38.138 cpu : usr=90.98%, sys=7.96%, ctx=60, majf=0, minf=1 00:25:38.138 IO depths : 1=0.1%, 2=7.8%, 4=63.2%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 issued rwts: total=9452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.138 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:38.138 filename0: (groupid=0, jobs=1): err= 0: pid=84073: Wed Nov 20 05:35:52 2024 00:25:38.138 read: IOPS=2017, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:25:38.138 slat (nsec): min=4454, max=55562, avg=13234.46, stdev=5455.19 00:25:38.138 clat (usec): min=693, max=8702, avg=3927.60, stdev=923.61 00:25:38.138 lat (usec): min=702, max=8717, avg=3940.83, stdev=923.37 00:25:38.138 clat percentiles (usec): 00:25:38.138 | 1.00th=[ 1532], 5.00th=[ 2474], 10.00th=[ 2769], 20.00th=[ 3392], 00:25:38.138 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3851], 60.00th=[ 4113], 00:25:38.138 | 70.00th=[ 4359], 80.00th=[ 4883], 90.00th=[ 5211], 95.00th=[ 5342], 00:25:38.138 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 7635], 00:25:38.138 | 99.99th=[ 8225] 00:25:38.138 bw ( KiB/s): min=14896, max=17504, per=25.33%, avg=16037.33, stdev=859.92, samples=9 00:25:38.138 iops : min= 1862, max= 2188, avg=2004.67, stdev=107.49, samples=9 00:25:38.138 lat (usec) : 750=0.06%, 1000=0.02% 00:25:38.138 lat (msec) : 2=1.98%, 4=53.48%, 10=44.46% 00:25:38.138 cpu : usr=90.48%, sys=8.30%, ctx=47, majf=0, minf=0 00:25:38.138 IO depths : 1=0.1%, 2=4.4%, 4=65.5%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 issued rwts: total=10091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.138 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:38.138 filename1: (groupid=0, jobs=1): err= 0: pid=84074: Wed Nov 20 05:35:52 2024 00:25:38.138 read: IOPS=2005, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5001msec) 00:25:38.138 slat (nsec): min=5953, max=67977, avg=17609.14, stdev=5229.63 00:25:38.138 clat (usec): min=992, max=8696, avg=3939.14, stdev=901.10 00:25:38.138 lat (usec): min=1005, max=8711, avg=3956.75, stdev=900.03 00:25:38.138 clat percentiles (usec): 00:25:38.138 | 1.00th=[ 1696], 
5.00th=[ 2540], 10.00th=[ 2769], 20.00th=[ 3392], 00:25:38.138 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3851], 60.00th=[ 4146], 00:25:38.138 | 70.00th=[ 4359], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5342], 00:25:38.138 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7635], 00:25:38.138 | 99.99th=[ 8225] 00:25:38.138 bw ( KiB/s): min=14960, max=17392, per=25.42%, avg=16097.78, stdev=804.21, samples=9 00:25:38.138 iops : min= 1870, max= 2174, avg=2012.22, stdev=100.53, samples=9 00:25:38.138 lat (usec) : 1000=0.01% 00:25:38.138 lat (msec) : 2=1.50%, 4=53.76%, 10=44.73% 00:25:38.138 cpu : usr=91.18%, sys=7.54%, ctx=9, majf=0, minf=0 00:25:38.138 IO depths : 1=0.1%, 2=4.9%, 4=65.3%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 issued rwts: total=10031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.138 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:38.138 filename1: (groupid=0, jobs=1): err= 0: pid=84075: Wed Nov 20 05:35:52 2024 00:25:38.138 read: IOPS=2003, BW=15.6MiB/s (16.4MB/s)(78.3MiB/5001msec) 00:25:38.138 slat (nsec): min=4849, max=63814, avg=17184.09, stdev=5279.10 00:25:38.138 clat (usec): min=1027, max=8688, avg=3945.75, stdev=897.18 00:25:38.138 lat (usec): min=1036, max=8715, avg=3962.93, stdev=896.53 00:25:38.138 clat percentiles (usec): 00:25:38.138 | 1.00th=[ 1696], 5.00th=[ 2540], 10.00th=[ 2802], 20.00th=[ 3392], 00:25:38.138 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3884], 60.00th=[ 4178], 00:25:38.138 | 70.00th=[ 4359], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5342], 00:25:38.138 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7635], 00:25:38.138 | 99.99th=[ 8225] 00:25:38.138 bw ( KiB/s): min=14896, max=17504, per=25.40%, avg=16085.44, stdev=833.60, samples=9 00:25:38.138 iops : min= 1862, max= 2188, avg=2010.67, stdev=104.19, samples=9 00:25:38.138 lat (msec) : 2=1.38%, 4=53.82%, 10=44.80% 00:25:38.138 cpu : usr=90.50%, sys=8.22%, ctx=7, majf=0, minf=0 00:25:38.138 IO depths : 1=0.1%, 2=4.9%, 4=65.3%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.138 issued rwts: total=10018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.138 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:38.138 00:25:38.138 Run status group 0 (all jobs): 00:25:38.138 READ: bw=61.8MiB/s (64.8MB/s), 14.8MiB/s-15.8MiB/s (15.5MB/s-16.5MB/s), io=309MiB (324MB), run=5001-5002msec 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.138 05:35:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.138 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.138 00:25:38.139 real 0m23.400s 00:25:38.139 user 2m2.000s 00:25:38.139 sys 0m9.611s 00:25:38.139 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:38.139 05:35:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 ************************************ 00:25:38.139 END TEST fio_dif_rand_params 00:25:38.139 ************************************ 00:25:38.139 05:35:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:38.139 05:35:52 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:38.139 05:35:52 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:38.139 05:35:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 ************************************ 00:25:38.139 START TEST fio_dif_digest 00:25:38.139 ************************************ 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:38.139 05:35:52 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:38.139 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.398 bdev_null0 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.398 [2024-11-20 05:35:52.679117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.398 { 00:25:38.398 "params": { 00:25:38.398 "name": "Nvme$subsystem", 00:25:38.398 "trtype": "$TEST_TRANSPORT", 00:25:38.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.398 "adrfam": "ipv4", 00:25:38.398 "trsvcid": "$NVMF_PORT", 00:25:38.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.398 "hdgst": ${hdgst:-false}, 00:25:38.398 "ddgst": ${ddgst:-false} 00:25:38.398 }, 00:25:38.398 "method": "bdev_nvme_attach_controller" 00:25:38.398 } 00:25:38.398 EOF 00:25:38.398 )") 00:25:38.398 
05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:38.398 "params": { 00:25:38.398 "name": "Nvme0", 00:25:38.398 "trtype": "tcp", 00:25:38.398 "traddr": "10.0.0.3", 00:25:38.398 "adrfam": "ipv4", 00:25:38.398 "trsvcid": "4420", 00:25:38.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.398 "hdgst": true, 00:25:38.398 "ddgst": true 00:25:38.398 }, 00:25:38.398 "method": "bdev_nvme_attach_controller" 00:25:38.398 }' 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:38.398 05:35:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.658 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:38.658 ... 
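Note: the digest job file is produced by gen_fio_conf and is not echoed in the trace, but from the parameters set earlier (bs=128k, numjobs=3, iodepth=3, runtime=10) and the job line fio prints above, it is roughly the following. This is an approximation for orientation, not the literal generated file; the bdev name Nvme0n1 is the usual name for namespace 1 of the attached Nvme0 controller and is assumed here:

  cat > digest.fio <<'EOF'     # illustrative file name
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  runtime=10
  time_based=1

  [filename0]
  filename=Nvme0n1
  numjobs=3
  EOF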
00:25:38.658 fio-3.35 00:25:38.658 Starting 3 threads 00:25:50.886 00:25:50.886 filename0: (groupid=0, jobs=1): err= 0: pid=84181: Wed Nov 20 05:36:03 2024 00:25:50.886 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(264MiB/10004msec) 00:25:50.886 slat (nsec): min=3832, max=66623, avg=17329.71, stdev=5721.25 00:25:50.886 clat (usec): min=11972, max=20554, avg=14167.57, stdev=1178.63 00:25:50.886 lat (usec): min=11988, max=20573, avg=14184.90, stdev=1178.89 00:25:50.886 clat percentiles (usec): 00:25:50.886 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:25:50.886 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:25:50.886 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16057], 95.00th=[17171], 00:25:50.886 | 99.00th=[17957], 99.50th=[17957], 99.90th=[20579], 99.95th=[20579], 00:25:50.886 | 99.99th=[20579] 00:25:50.886 bw ( KiB/s): min=23808, max=28416, per=33.62%, avg=27243.79, stdev=1314.56, samples=19 00:25:50.886 iops : min= 186, max= 222, avg=212.84, stdev=10.27, samples=19 00:25:50.886 lat (msec) : 20=99.86%, 50=0.14% 00:25:50.886 cpu : usr=91.15%, sys=7.99%, ctx=10, majf=0, minf=0 00:25:50.886 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.886 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:50.886 filename0: (groupid=0, jobs=1): err= 0: pid=84182: Wed Nov 20 05:36:03 2024 00:25:50.886 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(264MiB/10004msec) 00:25:50.886 slat (nsec): min=4749, max=46684, avg=16661.92, stdev=5225.23 00:25:50.886 clat (usec): min=11954, max=20549, avg=14170.25, stdev=1176.69 00:25:50.886 lat (usec): min=11969, max=20568, avg=14186.91, stdev=1177.59 00:25:50.886 clat percentiles (usec): 00:25:50.886 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:25:50.886 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:25:50.886 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16057], 95.00th=[17171], 00:25:50.886 | 99.00th=[17957], 99.50th=[17957], 99.90th=[20579], 99.95th=[20579], 00:25:50.886 | 99.99th=[20579] 00:25:50.886 bw ( KiB/s): min=23808, max=28416, per=33.62%, avg=27243.79, stdev=1314.56, samples=19 00:25:50.886 iops : min= 186, max= 222, avg=212.84, stdev=10.27, samples=19 00:25:50.886 lat (msec) : 20=99.86%, 50=0.14% 00:25:50.886 cpu : usr=90.81%, sys=8.39%, ctx=8, majf=0, minf=0 00:25:50.886 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.886 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:50.886 filename0: (groupid=0, jobs=1): err= 0: pid=84183: Wed Nov 20 05:36:03 2024 00:25:50.886 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(264MiB/10007msec) 00:25:50.886 slat (nsec): min=6917, max=89123, avg=13747.43, stdev=8107.30 00:25:50.886 clat (usec): min=13370, max=20752, avg=14177.19, stdev=1177.43 00:25:50.886 lat (usec): min=13379, max=20781, avg=14190.94, stdev=1177.94 00:25:50.886 clat percentiles (usec): 00:25:50.886 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:25:50.886 | 30.00th=[13435], 
40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:25:50.886 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16188], 95.00th=[16909], 00:25:50.886 | 99.00th=[17957], 99.50th=[18220], 99.90th=[20841], 99.95th=[20841], 00:25:50.886 | 99.99th=[20841] 00:25:50.886 bw ( KiB/s): min=23808, max=28416, per=33.62%, avg=27243.79, stdev=1314.56, samples=19 00:25:50.886 iops : min= 186, max= 222, avg=212.84, stdev=10.27, samples=19 00:25:50.886 lat (msec) : 20=99.86%, 50=0.14% 00:25:50.886 cpu : usr=90.44%, sys=8.76%, ctx=19, majf=0, minf=0 00:25:50.886 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.886 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.886 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:50.886 00:25:50.886 Run status group 0 (all jobs): 00:25:50.886 READ: bw=79.1MiB/s (83.0MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=792MiB (830MB), run=10004-10007msec 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.886 00:25:50.886 real 0m10.941s 00:25:50.886 user 0m27.867s 00:25:50.886 sys 0m2.750s 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:50.886 05:36:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:50.886 ************************************ 00:25:50.886 END TEST fio_dif_digest 00:25:50.886 ************************************ 00:25:50.886 05:36:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:50.886 05:36:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.886 rmmod nvme_tcp 00:25:50.886 rmmod nvme_fabrics 00:25:50.886 rmmod nvme_keyring 00:25:50.886 05:36:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83434 ']' 00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83434 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 83434 ']' 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 83434 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83434 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:50.887 killing process with pid 83434 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83434' 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@971 -- # kill 83434 00:25:50.887 05:36:03 nvmf_dif -- common/autotest_common.sh@976 -- # wait 83434 00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:50.887 05:36:03 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:50.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:50.887 Waiting for block devices as requested 00:25:50.887 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:50.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
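Note: nvmftestfini then unwinds the fixture. Only firewall rules tagged SPDK_NVMF are dropped, after which the bridge, the veth pairs, and the target-namespace interfaces are removed. Condensed from the commands traced above:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep unrelated rules intact
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # _remove_spdk_ns (run with xtrace disabled above) removes the namespace itself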
00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.887 05:36:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:50.887 05:36:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.887 05:36:04 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:50.887 00:25:50.887 real 0m59.638s 00:25:50.887 user 3m45.129s 00:25:50.887 sys 0m21.152s 00:25:50.887 05:36:04 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:50.887 05:36:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.887 ************************************ 00:25:50.887 END TEST nvmf_dif 00:25:50.887 ************************************ 00:25:50.887 05:36:04 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:50.887 05:36:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:50.887 05:36:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:50.887 05:36:04 -- common/autotest_common.sh@10 -- # set +x 00:25:50.887 ************************************ 00:25:50.887 START TEST nvmf_abort_qd_sizes 00:25:50.887 ************************************ 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:50.887 * Looking for test storage... 00:25:50.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:50.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.887 --rc genhtml_branch_coverage=1 00:25:50.887 --rc genhtml_function_coverage=1 00:25:50.887 --rc genhtml_legend=1 00:25:50.887 --rc geninfo_all_blocks=1 00:25:50.887 --rc geninfo_unexecuted_blocks=1 00:25:50.887 00:25:50.887 ' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:50.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.887 --rc genhtml_branch_coverage=1 00:25:50.887 --rc genhtml_function_coverage=1 00:25:50.887 --rc genhtml_legend=1 00:25:50.887 --rc geninfo_all_blocks=1 00:25:50.887 --rc geninfo_unexecuted_blocks=1 00:25:50.887 00:25:50.887 ' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:50.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.887 --rc genhtml_branch_coverage=1 00:25:50.887 --rc genhtml_function_coverage=1 00:25:50.887 --rc genhtml_legend=1 00:25:50.887 --rc geninfo_all_blocks=1 00:25:50.887 --rc geninfo_unexecuted_blocks=1 00:25:50.887 00:25:50.887 ' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:50.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.887 --rc genhtml_branch_coverage=1 00:25:50.887 --rc genhtml_function_coverage=1 00:25:50.887 --rc genhtml_legend=1 00:25:50.887 --rc geninfo_all_blocks=1 00:25:50.887 --rc geninfo_unexecuted_blocks=1 00:25:50.887 00:25:50.887 ' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.887 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:50.888 Cannot find device "nvmf_init_br" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:50.888 Cannot find device "nvmf_init_br2" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:50.888 Cannot find device "nvmf_tgt_br" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:50.888 Cannot find device "nvmf_tgt_br2" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:50.888 Cannot find device "nvmf_init_br" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:50.888 Cannot find device "nvmf_init_br2" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:50.888 Cannot find device "nvmf_tgt_br" 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:50.888 05:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:50.888 Cannot find device "nvmf_tgt_br2" 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:50.888 Cannot find device "nvmf_br" 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:50.888 Cannot find device "nvmf_init_if" 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:50.888 Cannot find device "nvmf_init_if2" 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:50.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
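Note: the "Cannot find device" and "Cannot open network namespace" messages above are the expected result of cleaning up a topology that does not exist yet; nvmf_veth_init then builds it from scratch. The commands that follow in the trace amount to this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  # bring every interface up, enslave the four *_br peers to nvmf_br,
  # open TCP/4420 on the initiator interfaces with SPDK_NVMF-tagged iptables rules,
  # then ping 10.0.0.3/10.0.0.4 (and 10.0.0.1/10.0.0.2 from the namespace) to confirm reachability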
00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:50.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:50.888 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:50.889 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:50.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:25:50.889 00:25:50.889 --- 10.0.0.3 ping statistics --- 00:25:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.889 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:50.889 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:50.889 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:25:50.889 00:25:50.889 --- 10.0.0.4 ping statistics --- 00:25:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.889 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:50.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:50.889 00:25:50.889 --- 10.0.0.1 ping statistics --- 00:25:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.889 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:50.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:50.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:25:50.889 00:25:50.889 --- 10.0.0.2 ping statistics --- 00:25:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.889 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:50.889 05:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:51.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84827 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84827 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84827 ']' 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.715 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 [2024-11-20 05:36:06.164475] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:25:51.715 [2024-11-20 05:36:06.164564] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.973 [2024-11-20 05:36:06.311337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.973 [2024-11-20 05:36:06.351651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.973 [2024-11-20 05:36:06.351718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.973 [2024-11-20 05:36:06.351734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.973 [2024-11-20 05:36:06.351745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.973 [2024-11-20 05:36:06.351754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.973 [2024-11-20 05:36:06.352684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.973 [2024-11-20 05:36:06.355942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.973 [2024-11-20 05:36:06.356051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.973 [2024-11-20 05:36:06.356066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.974 [2024-11-20 05:36:06.388901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:51.974 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.974 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:25:51.974 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:51.974 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.974 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:52.233 05:36:06 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
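The nvme_in_userspace enumeration above reduces to one lspci pipeline that matches PCI class 01 / subclass 08 / prog-if 02 (NVMe); a condensed sketch of the same commands, with the per-device permission checks and the FreeBSD branch dropped:

    # lspci -mm quotes the class field, hence the quoted cc value
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

On this run it reports the two controllers 0000:00:10.0 and 0000:00:11.0 that the abort test then verifies under /sys/bus/pci/drivers/nvme.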
00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:52.233 05:36:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:52.233 ************************************ 00:25:52.233 START TEST spdk_target_abort 00:25:52.233 ************************************ 00:25:52.233 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:25:52.233 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:52.233 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:52.233 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.233 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.233 spdk_targetn1 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 [2024-11-20 05:36:06.622540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 [2024-11-20 05:36:06.662384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:52.234 05:36:06 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:52.234 05:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:55.530 Initializing NVMe Controllers 00:25:55.530 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:55.530 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:55.530 Initialization complete. Launching workers. 
00:25:55.530 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11456, failed: 0 00:25:55.530 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1026, failed to submit 10430 00:25:55.530 success 742, unsuccessful 284, failed 0 00:25:55.530 05:36:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:55.530 05:36:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:59.719 Initializing NVMe Controllers 00:25:59.719 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:59.719 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:59.719 Initialization complete. Launching workers. 00:25:59.719 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8934, failed: 0 00:25:59.719 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1143, failed to submit 7791 00:25:59.719 success 389, unsuccessful 754, failed 0 00:25:59.719 05:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:59.719 05:36:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:02.252 Initializing NVMe Controllers 00:26:02.252 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:02.252 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:02.252 Initialization complete. Launching workers. 
00:26:02.252 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30403, failed: 0 00:26:02.252 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2221, failed to submit 28182 00:26:02.252 success 406, unsuccessful 1815, failed 0 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.252 05:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84827 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84827 ']' 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84827 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84827 00:26:02.819 killing process with pid 84827 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84827' 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84827 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84827 00:26:02.819 ************************************ 00:26:02.819 END TEST spdk_target_abort 00:26:02.819 ************************************ 00:26:02.819 00:26:02.819 real 0m10.708s 00:26:02.819 user 0m41.089s 00:26:02.819 sys 0m2.100s 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 05:36:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:02.819 05:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:02.819 05:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:02.819 05:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 ************************************ 00:26:02.819 START TEST kernel_target_abort 00:26:02.819 
************************************ 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:02.819 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:03.077 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:03.077 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:03.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:03.335 Waiting for block devices as requested 00:26:03.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:03.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:03.594 No valid GPT data, bailing 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:03.594 05:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:03.594 No valid GPT data, bailing 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:03.594 No valid GPT data, bailing 00:26:03.594 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:03.853 No valid GPT data, bailing 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:26:03.853 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 --hostid=4bd82fc4-6e19-4d22-95c5-23a13095cd93 -a 10.0.0.1 -t tcp -s 4420 00:26:03.854 00:26:03.854 Discovery Log Number of Records 2, Generation counter 2 00:26:03.854 =====Discovery Log Entry 0====== 00:26:03.854 trtype: tcp 00:26:03.854 adrfam: ipv4 00:26:03.854 subtype: current discovery subsystem 00:26:03.854 treq: not specified, sq flow control disable supported 00:26:03.854 portid: 1 00:26:03.854 trsvcid: 4420 00:26:03.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:03.854 traddr: 10.0.0.1 00:26:03.854 eflags: none 00:26:03.854 sectype: none 00:26:03.854 =====Discovery Log Entry 1====== 00:26:03.854 trtype: tcp 00:26:03.854 adrfam: ipv4 00:26:03.854 subtype: nvme subsystem 00:26:03.854 treq: not specified, sq flow control disable supported 00:26:03.854 portid: 1 00:26:03.854 trsvcid: 4420 00:26:03.854 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:03.854 traddr: 10.0.0.1 00:26:03.854 eflags: none 00:26:03.854 sectype: none 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:03.854 05:36:18 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:03.854 05:36:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:07.164 Initializing NVMe Controllers 00:26:07.164 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:07.164 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:07.164 Initialization complete. Launching workers. 00:26:07.164 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33645, failed: 0 00:26:07.164 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33645, failed to submit 0 00:26:07.164 success 0, unsuccessful 33645, failed 0 00:26:07.164 05:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:07.164 05:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:10.449 Initializing NVMe Controllers 00:26:10.449 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:10.449 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:10.449 Initialization complete. Launching workers. 
00:26:10.449 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69160, failed: 0 00:26:10.449 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30018, failed to submit 39142 00:26:10.449 success 0, unsuccessful 30018, failed 0 00:26:10.449 05:36:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:10.449 05:36:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:13.735 Initializing NVMe Controllers 00:26:13.735 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:13.735 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:13.735 Initialization complete. Launching workers. 00:26:13.735 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76472, failed: 0 00:26:13.735 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19090, failed to submit 57382 00:26:13.735 success 0, unsuccessful 19090, failed 0 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:13.735 05:36:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:14.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.204 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.204 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.204 00:26:16.204 real 0m13.056s 00:26:16.204 user 0m6.547s 00:26:16.204 sys 0m4.003s 00:26:16.204 05:36:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:16.204 05:36:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:16.204 ************************************ 00:26:16.204 END TEST kernel_target_abort 00:26:16.204 ************************************ 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:16.204 
05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:16.204 rmmod nvme_tcp 00:26:16.204 rmmod nvme_fabrics 00:26:16.204 rmmod nvme_keyring 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84827 ']' 00:26:16.204 Process with pid 84827 is not found 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84827 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84827 ']' 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84827 00:26:16.204 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84827) - No such process 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84827 is not found' 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:16.204 05:36:30 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:16.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.462 Waiting for block devices as requested 00:26:16.462 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:16.720 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:16.720 05:36:31 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:16.720 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:26:16.978 00:26:16.978 real 0m26.628s 00:26:16.978 user 0m48.773s 00:26:16.978 sys 0m7.431s 00:26:16.978 ************************************ 00:26:16.978 END TEST nvmf_abort_qd_sizes 00:26:16.978 ************************************ 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:16.978 05:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:16.978 05:36:31 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:16.978 05:36:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:16.978 05:36:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:16.978 05:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:16.978 ************************************ 00:26:16.978 START TEST keyring_file 00:26:16.978 ************************************ 00:26:16.978 05:36:31 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:16.978 * Looking for test storage... 
00:26:16.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:16.978 05:36:31 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:16.978 05:36:31 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:26:16.978 05:36:31 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@345 -- # : 1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@353 -- # local d=1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@355 -- # echo 1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@353 -- # local d=2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@355 -- # echo 2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@368 -- # return 0 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.238 --rc genhtml_branch_coverage=1 00:26:17.238 --rc genhtml_function_coverage=1 00:26:17.238 --rc genhtml_legend=1 00:26:17.238 --rc geninfo_all_blocks=1 00:26:17.238 --rc geninfo_unexecuted_blocks=1 00:26:17.238 00:26:17.238 ' 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.238 --rc genhtml_branch_coverage=1 00:26:17.238 --rc genhtml_function_coverage=1 00:26:17.238 --rc genhtml_legend=1 00:26:17.238 --rc geninfo_all_blocks=1 00:26:17.238 --rc 
geninfo_unexecuted_blocks=1 00:26:17.238 00:26:17.238 ' 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.238 --rc genhtml_branch_coverage=1 00:26:17.238 --rc genhtml_function_coverage=1 00:26:17.238 --rc genhtml_legend=1 00:26:17.238 --rc geninfo_all_blocks=1 00:26:17.238 --rc geninfo_unexecuted_blocks=1 00:26:17.238 00:26:17.238 ' 00:26:17.238 05:36:31 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.238 --rc genhtml_branch_coverage=1 00:26:17.238 --rc genhtml_function_coverage=1 00:26:17.238 --rc genhtml_legend=1 00:26:17.238 --rc geninfo_all_blocks=1 00:26:17.238 --rc geninfo_unexecuted_blocks=1 00:26:17.238 00:26:17.238 ' 00:26:17.238 05:36:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:17.238 05:36:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.238 05:36:31 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.238 05:36:31 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.238 05:36:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.238 05:36:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.238 05:36:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:17.238 05:36:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@51 -- # : 0 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.238 05:36:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:17.239 05:36:31 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NDAKxq08qg 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NDAKxq08qg 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NDAKxq08qg 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NDAKxq08qg 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eKavq1cjYz 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:17.239 05:36:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eKavq1cjYz 00:26:17.239 05:36:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eKavq1cjYz 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eKavq1cjYz 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=85731 00:26:17.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
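Each prep_key call traced above follows the same small pattern; a rough sketch with this run's key bytes and temp path (the redirect of the formatted key into the file is implied by the trace rather than shown verbatim, and the python one-liner behind format_interchange_psk is left to that helper):

    key=00112233445566778899aabbccddeeff          # key0; key1 uses 112233445566778899aabbccddeeff00
    path=$(mktemp)                                 # e.g. /tmp/tmp.NDAKxq08qg
    format_interchange_psk "$key" 0 > "$path"      # digest 0, NVMeTLSkey-1 framing
    chmod 0600 "$path"
    key0path=$path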
00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85731 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85731 ']' 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.239 05:36:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:17.239 05:36:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:17.498 [2024-11-20 05:36:31.778567] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:26:17.498 [2024-11-20 05:36:31.778671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85731 ] 00:26:17.498 [2024-11-20 05:36:31.927712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.498 [2024-11-20 05:36:31.966890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.756 [2024-11-20 05:36:32.012723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:17.756 05:36:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:17.756 [2024-11-20 05:36:32.147972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.756 null0 00:26:17.756 [2024-11-20 05:36:32.179923] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:17.756 [2024-11-20 05:36:32.180129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.756 05:36:32 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.756 05:36:32 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.756 [2024-11-20 05:36:32.207859] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:17.756 request: 00:26:17.756 { 00:26:17.756 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.756 "secure_channel": false, 00:26:17.756 "listen_address": { 00:26:17.756 "trtype": "tcp", 00:26:17.756 "traddr": "127.0.0.1", 00:26:17.756 "trsvcid": "4420" 00:26:17.756 }, 00:26:17.756 "method": "nvmf_subsystem_add_listener", 00:26:17.756 "req_id": 1 00:26:17.756 } 00:26:17.756 Got JSON-RPC error response 00:26:17.756 response: 00:26:17.756 { 00:26:17.756 "code": -32602, 00:26:17.756 "message": "Invalid parameters" 00:26:17.756 } 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:17.756 05:36:32 keyring_file -- keyring/file.sh@47 -- # bperfpid=85735 00:26:17.756 05:36:32 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:17.756 05:36:32 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85735 /var/tmp/bperf.sock 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85735 ']' 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:17.756 05:36:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:17.756 [2024-11-20 05:36:32.266487] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:26:17.756 [2024-11-20 05:36:32.266713] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85735 ] 00:26:18.014 [2024-11-20 05:36:32.415890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.014 [2024-11-20 05:36:32.454089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.015 [2024-11-20 05:36:32.486649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:18.273 05:36:32 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:18.273 05:36:32 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:18.273 05:36:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:18.273 05:36:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:18.531 05:36:32 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eKavq1cjYz 00:26:18.531 05:36:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eKavq1cjYz 00:26:18.789 05:36:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:26:18.789 05:36:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:18.789 05:36:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:18.789 05:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:18.789 05:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:19.048 05:36:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NDAKxq08qg == \/\t\m\p\/\t\m\p\.\N\D\A\K\x\q\0\8\q\g ]] 00:26:19.048 05:36:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:26:19.048 05:36:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:26:19.048 05:36:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:19.048 05:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:19.048 05:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:19.307 05:36:33 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.eKavq1cjYz == \/\t\m\p\/\t\m\p\.\e\K\a\v\q\1\c\j\Y\z ]] 00:26:19.307 05:36:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:26:19.307 05:36:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:19.307 05:36:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:19.307 05:36:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:19.307 05:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:19.307 05:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:19.565 05:36:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:19.565 05:36:33 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:26:19.565 05:36:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:19.565 05:36:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:19.565 05:36:33 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:19.565 05:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:19.565 05:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:19.823 05:36:34 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:26:19.823 05:36:34 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:19.823 05:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:20.081 [2024-11-20 05:36:34.461348] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:20.081 nvme0n1 00:26:20.081 05:36:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:26:20.081 05:36:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:20.081 05:36:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:20.081 05:36:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:20.081 05:36:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:20.081 05:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:20.339 05:36:34 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:26:20.339 05:36:34 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:26:20.339 05:36:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:20.339 05:36:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:20.339 05:36:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:20.339 05:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:20.339 05:36:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:20.906 05:36:35 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:26:20.906 05:36:35 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.906 Running I/O for 1 seconds... 
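Everything in the block above is issued against the bdevperf RPC socket rather than the spdk_tgt socket: the controller is attached over NVMe/TCP with --psk key0, the key refcounts are re-checked, and the bdevperf helper script then drives the one-second randrw workload whose results follow. A condensed sketch of that sequence, with the socket path, NQNs, and key name taken verbatim from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach a TLS-protected NVMe/TCP controller inside the running bdevperf app, using the registered key0
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # kick off the configured workload and wait for it to finish
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests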
00:26:21.842 10833.00 IOPS, 42.32 MiB/s 00:26:21.842 Latency(us) 00:26:21.842 [2024-11-20T05:36:36.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.842 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:21.842 nvme0n1 : 1.01 10881.92 42.51 0.00 0.00 11726.87 5153.51 24069.59 00:26:21.842 [2024-11-20T05:36:36.355Z] =================================================================================================================== 00:26:21.842 [2024-11-20T05:36:36.355Z] Total : 10881.92 42.51 0.00 0.00 11726.87 5153.51 24069.59 00:26:21.842 { 00:26:21.842 "results": [ 00:26:21.842 { 00:26:21.842 "job": "nvme0n1", 00:26:21.842 "core_mask": "0x2", 00:26:21.842 "workload": "randrw", 00:26:21.842 "percentage": 50, 00:26:21.842 "status": "finished", 00:26:21.842 "queue_depth": 128, 00:26:21.842 "io_size": 4096, 00:26:21.842 "runtime": 1.007451, 00:26:21.842 "iops": 10881.918822850937, 00:26:21.842 "mibps": 42.507495401761474, 00:26:21.842 "io_failed": 0, 00:26:21.842 "io_timeout": 0, 00:26:21.842 "avg_latency_us": 11726.868413257818, 00:26:21.842 "min_latency_us": 5153.512727272728, 00:26:21.842 "max_latency_us": 24069.585454545453 00:26:21.842 } 00:26:21.842 ], 00:26:21.842 "core_count": 1 00:26:21.842 } 00:26:21.842 05:36:36 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:21.842 05:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:22.101 05:36:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:26:22.101 05:36:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:22.101 05:36:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:22.101 05:36:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:22.101 05:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:22.101 05:36:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:22.669 05:36:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:22.669 05:36:36 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:26:22.669 05:36:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:22.669 05:36:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:22.669 05:36:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:22.669 05:36:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:22.669 05:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:22.929 05:36:37 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:26:22.929 05:36:37 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:22.929 05:36:37 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.929 05:36:37 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:22.929 05:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:23.188 [2024-11-20 05:36:37.444764] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:23.188 [2024-11-20 05:36:37.444763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfaae0 (107): Transport endpoint is not connected 00:26:23.188 [2024-11-20 05:36:37.445751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfaae0 (9): Bad file descriptor 00:26:23.188 [2024-11-20 05:36:37.446747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:23.188 [2024-11-20 05:36:37.446763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:23.188 [2024-11-20 05:36:37.446774] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:23.188 [2024-11-20 05:36:37.446785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:23.188 request: 00:26:23.188 { 00:26:23.188 "name": "nvme0", 00:26:23.188 "trtype": "tcp", 00:26:23.188 "traddr": "127.0.0.1", 00:26:23.188 "adrfam": "ipv4", 00:26:23.188 "trsvcid": "4420", 00:26:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.188 "prchk_reftag": false, 00:26:23.188 "prchk_guard": false, 00:26:23.188 "hdgst": false, 00:26:23.188 "ddgst": false, 00:26:23.188 "psk": "key1", 00:26:23.188 "allow_unrecognized_csi": false, 00:26:23.188 "method": "bdev_nvme_attach_controller", 00:26:23.188 "req_id": 1 00:26:23.188 } 00:26:23.188 Got JSON-RPC error response 00:26:23.188 response: 00:26:23.188 { 00:26:23.188 "code": -5, 00:26:23.188 "message": "Input/output error" 00:26:23.188 } 00:26:23.188 05:36:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:23.188 05:36:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:23.188 05:36:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:23.188 05:36:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:23.188 05:36:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:26:23.188 05:36:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:23.188 05:36:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:23.188 05:36:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:23.188 05:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:23.188 05:36:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:23.447 05:36:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:23.447 05:36:37 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:26:23.447 05:36:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:23.447 05:36:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:23.447 05:36:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:23.447 05:36:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:23.447 05:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:23.706 05:36:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:26:23.706 05:36:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:26:23.706 05:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:24.030 05:36:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:26:24.030 05:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:24.312 05:36:38 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:26:24.312 05:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:24.312 05:36:38 keyring_file -- keyring/file.sh@78 -- # jq length 00:26:24.570 05:36:38 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:26:24.570 05:36:38 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.NDAKxq08qg 00:26:24.570 05:36:38 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:24.570 05:36:38 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:26:24.570 05:36:38 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:24.570 05:36:38 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:24.571 05:36:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:24.571 05:36:38 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:24.571 05:36:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:24.571 05:36:38 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:24.571 05:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:24.833 [2024-11-20 05:36:39.167933] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NDAKxq08qg': 0100660 00:26:24.833 [2024-11-20 05:36:39.168183] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:24.833 request: 00:26:24.833 { 00:26:24.833 "name": "key0", 00:26:24.833 "path": "/tmp/tmp.NDAKxq08qg", 00:26:24.833 "method": "keyring_file_add_key", 00:26:24.833 "req_id": 1 00:26:24.833 } 00:26:24.833 Got JSON-RPC error response 00:26:24.833 response: 00:26:24.833 { 00:26:24.833 "code": -1, 00:26:24.833 "message": "Operation not permitted" 00:26:24.833 } 00:26:24.833 05:36:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:24.833 05:36:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:24.833 05:36:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:24.833 05:36:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:24.833 05:36:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.NDAKxq08qg 00:26:24.833 05:36:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:24.833 05:36:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NDAKxq08qg 00:26:25.093 05:36:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.NDAKxq08qg 00:26:25.093 05:36:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:26:25.093 05:36:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:25.093 05:36:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:25.093 05:36:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:25.093 05:36:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:25.093 05:36:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:25.351 05:36:39 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:26:25.351 05:36:39 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:25.351 05:36:39 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.351 05:36:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:25.351 05:36:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:25.610 [2024-11-20 05:36:39.968116] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NDAKxq08qg': No such file or directory 00:26:25.610 [2024-11-20 05:36:39.968171] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:25.610 [2024-11-20 05:36:39.968200] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:25.610 [2024-11-20 05:36:39.968215] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:26:25.610 [2024-11-20 05:36:39.968227] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:25.610 [2024-11-20 05:36:39.968237] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:25.610 request: 00:26:25.610 { 00:26:25.610 "name": "nvme0", 00:26:25.610 "trtype": "tcp", 00:26:25.610 "traddr": "127.0.0.1", 00:26:25.610 "adrfam": "ipv4", 00:26:25.610 "trsvcid": "4420", 00:26:25.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:25.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:25.610 "prchk_reftag": false, 00:26:25.610 "prchk_guard": false, 00:26:25.610 "hdgst": false, 00:26:25.610 "ddgst": false, 00:26:25.610 "psk": "key0", 00:26:25.610 "allow_unrecognized_csi": false, 00:26:25.610 "method": "bdev_nvme_attach_controller", 00:26:25.610 "req_id": 1 00:26:25.610 } 00:26:25.610 Got JSON-RPC error response 00:26:25.610 response: 00:26:25.610 { 00:26:25.610 "code": -19, 00:26:25.610 "message": "No such device" 00:26:25.610 } 00:26:25.610 05:36:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:25.610 05:36:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.610 05:36:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.610 05:36:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.610 05:36:39 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:26:25.610 05:36:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:25.868 05:36:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:25.868 
05:36:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S7bGnh9t4t 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:25.868 05:36:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S7bGnh9t4t 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S7bGnh9t4t 00:26:25.868 05:36:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.S7bGnh9t4t 00:26:25.868 05:36:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7bGnh9t4t 00:26:25.868 05:36:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7bGnh9t4t 00:26:26.127 05:36:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:26.127 05:36:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:26.694 nvme0n1 00:26:26.694 05:36:40 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:26:26.694 05:36:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:26.694 05:36:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:26.694 05:36:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.694 05:36:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.694 05:36:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:26.694 05:36:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:26:26.694 05:36:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:26:26.694 05:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:26.953 05:36:41 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:26:26.953 05:36:41 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:26:26.953 05:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.953 05:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.953 05:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:27.521 05:36:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:26:27.521 05:36:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:26:27.521 05:36:41 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:26:27.521 05:36:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:27.521 05:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:27.521 05:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:27.521 05:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:27.780 05:36:42 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:26:27.780 05:36:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:27.780 05:36:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:28.040 05:36:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:26:28.040 05:36:42 keyring_file -- keyring/file.sh@105 -- # jq length 00:26:28.040 05:36:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:28.299 05:36:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:26:28.299 05:36:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7bGnh9t4t 00:26:28.299 05:36:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7bGnh9t4t 00:26:28.558 05:36:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eKavq1cjYz 00:26:28.558 05:36:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eKavq1cjYz 00:26:28.817 05:36:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:28.817 05:36:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:29.076 nvme0n1 00:26:29.334 05:36:43 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:26:29.334 05:36:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:29.593 05:36:43 keyring_file -- keyring/file.sh@113 -- # config='{ 00:26:29.593 "subsystems": [ 00:26:29.593 { 00:26:29.593 "subsystem": "keyring", 00:26:29.593 "config": [ 00:26:29.593 { 00:26:29.593 "method": "keyring_file_add_key", 00:26:29.593 "params": { 00:26:29.593 "name": "key0", 00:26:29.593 "path": "/tmp/tmp.S7bGnh9t4t" 00:26:29.593 } 00:26:29.593 }, 00:26:29.593 { 00:26:29.593 "method": "keyring_file_add_key", 00:26:29.593 "params": { 00:26:29.593 "name": "key1", 00:26:29.593 "path": "/tmp/tmp.eKavq1cjYz" 00:26:29.593 } 00:26:29.593 } 00:26:29.593 ] 00:26:29.593 }, 00:26:29.593 { 00:26:29.593 "subsystem": "iobuf", 00:26:29.593 "config": [ 00:26:29.593 { 00:26:29.593 "method": "iobuf_set_options", 00:26:29.593 "params": { 00:26:29.593 "small_pool_count": 8192, 00:26:29.593 "large_pool_count": 1024, 00:26:29.593 "small_bufsize": 8192, 00:26:29.593 "large_bufsize": 135168, 00:26:29.593 "enable_numa": false 00:26:29.593 } 00:26:29.593 } 00:26:29.593 ] 00:26:29.593 }, 00:26:29.593 { 00:26:29.593 "subsystem": 
"sock", 00:26:29.593 "config": [ 00:26:29.593 { 00:26:29.593 "method": "sock_set_default_impl", 00:26:29.593 "params": { 00:26:29.593 "impl_name": "uring" 00:26:29.593 } 00:26:29.593 }, 00:26:29.593 { 00:26:29.593 "method": "sock_impl_set_options", 00:26:29.593 "params": { 00:26:29.593 "impl_name": "ssl", 00:26:29.593 "recv_buf_size": 4096, 00:26:29.593 "send_buf_size": 4096, 00:26:29.593 "enable_recv_pipe": true, 00:26:29.593 "enable_quickack": false, 00:26:29.593 "enable_placement_id": 0, 00:26:29.593 "enable_zerocopy_send_server": true, 00:26:29.593 "enable_zerocopy_send_client": false, 00:26:29.593 "zerocopy_threshold": 0, 00:26:29.593 "tls_version": 0, 00:26:29.593 "enable_ktls": false 00:26:29.593 } 00:26:29.593 }, 00:26:29.593 { 00:26:29.593 "method": "sock_impl_set_options", 00:26:29.593 "params": { 00:26:29.593 "impl_name": "posix", 00:26:29.594 "recv_buf_size": 2097152, 00:26:29.594 "send_buf_size": 2097152, 00:26:29.594 "enable_recv_pipe": true, 00:26:29.594 "enable_quickack": false, 00:26:29.594 "enable_placement_id": 0, 00:26:29.594 "enable_zerocopy_send_server": true, 00:26:29.594 "enable_zerocopy_send_client": false, 00:26:29.594 "zerocopy_threshold": 0, 00:26:29.594 "tls_version": 0, 00:26:29.594 "enable_ktls": false 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "sock_impl_set_options", 00:26:29.594 "params": { 00:26:29.594 "impl_name": "uring", 00:26:29.594 "recv_buf_size": 2097152, 00:26:29.594 "send_buf_size": 2097152, 00:26:29.594 "enable_recv_pipe": true, 00:26:29.594 "enable_quickack": false, 00:26:29.594 "enable_placement_id": 0, 00:26:29.594 "enable_zerocopy_send_server": false, 00:26:29.594 "enable_zerocopy_send_client": false, 00:26:29.594 "zerocopy_threshold": 0, 00:26:29.594 "tls_version": 0, 00:26:29.594 "enable_ktls": false 00:26:29.594 } 00:26:29.594 } 00:26:29.594 ] 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "subsystem": "vmd", 00:26:29.594 "config": [] 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "subsystem": "accel", 00:26:29.594 "config": [ 00:26:29.594 { 00:26:29.594 "method": "accel_set_options", 00:26:29.594 "params": { 00:26:29.594 "small_cache_size": 128, 00:26:29.594 "large_cache_size": 16, 00:26:29.594 "task_count": 2048, 00:26:29.594 "sequence_count": 2048, 00:26:29.594 "buf_count": 2048 00:26:29.594 } 00:26:29.594 } 00:26:29.594 ] 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "subsystem": "bdev", 00:26:29.594 "config": [ 00:26:29.594 { 00:26:29.594 "method": "bdev_set_options", 00:26:29.594 "params": { 00:26:29.594 "bdev_io_pool_size": 65535, 00:26:29.594 "bdev_io_cache_size": 256, 00:26:29.594 "bdev_auto_examine": true, 00:26:29.594 "iobuf_small_cache_size": 128, 00:26:29.594 "iobuf_large_cache_size": 16 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_raid_set_options", 00:26:29.594 "params": { 00:26:29.594 "process_window_size_kb": 1024, 00:26:29.594 "process_max_bandwidth_mb_sec": 0 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_iscsi_set_options", 00:26:29.594 "params": { 00:26:29.594 "timeout_sec": 30 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_nvme_set_options", 00:26:29.594 "params": { 00:26:29.594 "action_on_timeout": "none", 00:26:29.594 "timeout_us": 0, 00:26:29.594 "timeout_admin_us": 0, 00:26:29.594 "keep_alive_timeout_ms": 10000, 00:26:29.594 "arbitration_burst": 0, 00:26:29.594 "low_priority_weight": 0, 00:26:29.594 "medium_priority_weight": 0, 00:26:29.594 "high_priority_weight": 0, 00:26:29.594 "nvme_adminq_poll_period_us": 
10000, 00:26:29.594 "nvme_ioq_poll_period_us": 0, 00:26:29.594 "io_queue_requests": 512, 00:26:29.594 "delay_cmd_submit": true, 00:26:29.594 "transport_retry_count": 4, 00:26:29.594 "bdev_retry_count": 3, 00:26:29.594 "transport_ack_timeout": 0, 00:26:29.594 "ctrlr_loss_timeout_sec": 0, 00:26:29.594 "reconnect_delay_sec": 0, 00:26:29.594 "fast_io_fail_timeout_sec": 0, 00:26:29.594 "disable_auto_failback": false, 00:26:29.594 "generate_uuids": false, 00:26:29.594 "transport_tos": 0, 00:26:29.594 "nvme_error_stat": false, 00:26:29.594 "rdma_srq_size": 0, 00:26:29.594 "io_path_stat": false, 00:26:29.594 "allow_accel_sequence": false, 00:26:29.594 "rdma_max_cq_size": 0, 00:26:29.594 "rdma_cm_event_timeout_ms": 0, 00:26:29.594 "dhchap_digests": [ 00:26:29.594 "sha256", 00:26:29.594 "sha384", 00:26:29.594 "sha512" 00:26:29.594 ], 00:26:29.594 "dhchap_dhgroups": [ 00:26:29.594 "null", 00:26:29.594 "ffdhe2048", 00:26:29.594 "ffdhe3072", 00:26:29.594 "ffdhe4096", 00:26:29.594 "ffdhe6144", 00:26:29.594 "ffdhe8192" 00:26:29.594 ] 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_nvme_attach_controller", 00:26:29.594 "params": { 00:26:29.594 "name": "nvme0", 00:26:29.594 "trtype": "TCP", 00:26:29.594 "adrfam": "IPv4", 00:26:29.594 "traddr": "127.0.0.1", 00:26:29.594 "trsvcid": "4420", 00:26:29.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.594 "prchk_reftag": false, 00:26:29.594 "prchk_guard": false, 00:26:29.594 "ctrlr_loss_timeout_sec": 0, 00:26:29.594 "reconnect_delay_sec": 0, 00:26:29.594 "fast_io_fail_timeout_sec": 0, 00:26:29.594 "psk": "key0", 00:26:29.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.594 "hdgst": false, 00:26:29.594 "ddgst": false, 00:26:29.594 "multipath": "multipath" 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_nvme_set_hotplug", 00:26:29.594 "params": { 00:26:29.594 "period_us": 100000, 00:26:29.594 "enable": false 00:26:29.594 } 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "method": "bdev_wait_for_examine" 00:26:29.594 } 00:26:29.594 ] 00:26:29.594 }, 00:26:29.594 { 00:26:29.594 "subsystem": "nbd", 00:26:29.594 "config": [] 00:26:29.594 } 00:26:29.594 ] 00:26:29.594 }' 00:26:29.594 05:36:43 keyring_file -- keyring/file.sh@115 -- # killprocess 85735 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85735 ']' 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85735 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85735 00:26:29.594 killing process with pid 85735 00:26:29.594 Received shutdown signal, test time was about 1.000000 seconds 00:26:29.594 00:26:29.594 Latency(us) 00:26:29.594 [2024-11-20T05:36:44.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.594 [2024-11-20T05:36:44.107Z] =================================================================================================================== 00:26:29.594 [2024-11-20T05:36:44.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85735' 00:26:29.594 
05:36:43 keyring_file -- common/autotest_common.sh@971 -- # kill 85735 00:26:29.594 05:36:43 keyring_file -- common/autotest_common.sh@976 -- # wait 85735 00:26:29.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:29.853 05:36:44 keyring_file -- keyring/file.sh@118 -- # bperfpid=85991 00:26:29.853 05:36:44 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85991 /var/tmp/bperf.sock 00:26:29.853 05:36:44 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:29.853 05:36:44 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:26:29.853 "subsystems": [ 00:26:29.853 { 00:26:29.853 "subsystem": "keyring", 00:26:29.853 "config": [ 00:26:29.853 { 00:26:29.853 "method": "keyring_file_add_key", 00:26:29.853 "params": { 00:26:29.853 "name": "key0", 00:26:29.853 "path": "/tmp/tmp.S7bGnh9t4t" 00:26:29.853 } 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "method": "keyring_file_add_key", 00:26:29.853 "params": { 00:26:29.853 "name": "key1", 00:26:29.853 "path": "/tmp/tmp.eKavq1cjYz" 00:26:29.853 } 00:26:29.853 } 00:26:29.853 ] 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "subsystem": "iobuf", 00:26:29.853 "config": [ 00:26:29.853 { 00:26:29.853 "method": "iobuf_set_options", 00:26:29.853 "params": { 00:26:29.853 "small_pool_count": 8192, 00:26:29.853 "large_pool_count": 1024, 00:26:29.853 "small_bufsize": 8192, 00:26:29.853 "large_bufsize": 135168, 00:26:29.853 "enable_numa": false 00:26:29.853 } 00:26:29.853 } 00:26:29.853 ] 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "subsystem": "sock", 00:26:29.853 "config": [ 00:26:29.853 { 00:26:29.853 "method": "sock_set_default_impl", 00:26:29.853 "params": { 00:26:29.853 "impl_name": "uring" 00:26:29.853 } 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "method": "sock_impl_set_options", 00:26:29.853 "params": { 00:26:29.853 "impl_name": "ssl", 00:26:29.853 "recv_buf_size": 4096, 00:26:29.853 "send_buf_size": 4096, 00:26:29.853 "enable_recv_pipe": true, 00:26:29.853 "enable_quickack": false, 00:26:29.853 "enable_placement_id": 0, 00:26:29.853 "enable_zerocopy_send_server": true, 00:26:29.853 "enable_zerocopy_send_client": false, 00:26:29.853 "zerocopy_threshold": 0, 00:26:29.853 "tls_version": 0, 00:26:29.853 "enable_ktls": false 00:26:29.853 } 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "method": "sock_impl_set_options", 00:26:29.853 "params": { 00:26:29.853 "impl_name": "posix", 00:26:29.853 "recv_buf_size": 2097152, 00:26:29.853 "send_buf_size": 2097152, 00:26:29.853 "enable_recv_pipe": true, 00:26:29.853 "enable_quickack": false, 00:26:29.853 "enable_placement_id": 0, 00:26:29.853 "enable_zerocopy_send_server": true, 00:26:29.853 "enable_zerocopy_send_client": false, 00:26:29.853 "zerocopy_threshold": 0, 00:26:29.853 "tls_version": 0, 00:26:29.853 "enable_ktls": false 00:26:29.853 } 00:26:29.853 }, 00:26:29.853 { 00:26:29.853 "method": "sock_impl_set_options", 00:26:29.853 "params": { 00:26:29.853 "impl_name": "uring", 00:26:29.853 "recv_buf_size": 2097152, 00:26:29.854 "send_buf_size": 2097152, 00:26:29.854 "enable_recv_pipe": true, 00:26:29.854 "enable_quickack": false, 00:26:29.854 "enable_placement_id": 0, 00:26:29.854 "enable_zerocopy_send_server": false, 00:26:29.854 "enable_zerocopy_send_client": false, 00:26:29.854 "zerocopy_threshold": 0, 00:26:29.854 "tls_version": 0, 00:26:29.854 "enable_ktls": false 00:26:29.854 } 00:26:29.854 } 00:26:29.854 ] 00:26:29.854 }, 00:26:29.854 { 
00:26:29.854 "subsystem": "vmd", 00:26:29.854 "config": [] 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "subsystem": "accel", 00:26:29.854 "config": [ 00:26:29.854 { 00:26:29.854 "method": "accel_set_options", 00:26:29.854 "params": { 00:26:29.854 "small_cache_size": 128, 00:26:29.854 "large_cache_size": 16, 00:26:29.854 "task_count": 2048, 00:26:29.854 "sequence_count": 2048, 00:26:29.854 "buf_count": 2048 00:26:29.854 } 00:26:29.854 } 00:26:29.854 ] 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "subsystem": "bdev", 00:26:29.854 "config": [ 00:26:29.854 { 00:26:29.854 "method": "bdev_set_options", 00:26:29.854 "params": { 00:26:29.854 "bdev_io_pool_size": 65535, 00:26:29.854 "bdev_io_cache_size": 256, 00:26:29.854 "bdev_auto_examine": true, 00:26:29.854 "iobuf_small_cache_size": 128, 00:26:29.854 "iobuf_large_cache_size": 16 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_raid_set_options", 00:26:29.854 "params": { 00:26:29.854 "process_window_size_kb": 1024, 00:26:29.854 "process_max_bandwidth_mb_sec": 0 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_iscsi_set_options", 00:26:29.854 "params": { 00:26:29.854 "timeout_sec": 30 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_nvme_set_options", 00:26:29.854 "params": { 00:26:29.854 "action_on_timeout": "none", 00:26:29.854 "timeout_us": 0, 00:26:29.854 "timeout_admin_us": 0, 00:26:29.854 "keep_alive_timeout_ms": 10000, 00:26:29.854 "arbitration_burst": 0, 00:26:29.854 "low_priority_weight": 0, 00:26:29.854 "medium_priority_weight": 0, 00:26:29.854 "high_priority_weight": 0, 00:26:29.854 "nvme_adminq_poll_period_us": 10000, 00:26:29.854 "nvme_ioq_poll_period_us": 0, 00:26:29.854 "io_queue_requests": 512, 00:26:29.854 "delay_cmd_submit": true, 00:26:29.854 "transport_retry_count": 4, 00:26:29.854 "bdev_retry_count": 3, 00:26:29.854 "transport_ack_timeout": 0, 00:26:29.854 "ctrlr_loss_timeout_sec": 0, 00:26:29.854 "reconnect_delay_sec": 0, 00:26:29.854 "fast_io_fail_timeout_sec": 0, 00:26:29.854 "disable_auto_failback": false, 00:26:29.854 "generate_uuids": false, 00:26:29.854 "transport_tos": 0, 00:26:29.854 "nvme_error_stat": false, 00:26:29.854 "rdma_srq_size": 0, 00:26:29.854 "io_path_stat": false, 00:26:29.854 "allow_accel_sequence": false, 00:26:29.854 "rdma_max_cq_size": 0, 00:26:29.854 "rdma_cm_event_timeout_ms": 0, 00:26:29.854 "dhchap_digests": [ 00:26:29.854 "sha256", 00:26:29.854 "sha384", 00:26:29.854 "sha512" 00:26:29.854 ], 00:26:29.854 "dhchap_dhgroups": [ 00:26:29.854 "null", 00:26:29.854 "ffdhe2048", 00:26:29.854 "ffdhe3072", 00:26:29.854 "ffdhe4096", 00:26:29.854 "ffdhe6144", 00:26:29.854 "ffdhe8192" 00:26:29.854 ] 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_nvme_attach_controller", 00:26:29.854 "params": { 00:26:29.854 "name": "nvme0", 00:26:29.854 "trtype": "TCP", 00:26:29.854 "adrfam": "IPv4", 00:26:29.854 "traddr": "127.0.0.1", 00:26:29.854 "trsvcid": "4420", 00:26:29.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.854 "prchk_reftag": false, 00:26:29.854 "prchk_guard": false, 00:26:29.854 "ctrlr_loss_timeout_sec": 0, 00:26:29.854 "reconnect_delay_sec": 0, 00:26:29.854 "fast_io_fail_timeout_sec": 0, 00:26:29.854 "psk": "key0", 00:26:29.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.854 "hdgst": false, 00:26:29.854 "ddgst": false, 00:26:29.854 "multipath": "multipath" 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_nvme_set_hotplug", 00:26:29.854 "params": { 00:26:29.854 
"period_us": 100000, 00:26:29.854 "enable": false 00:26:29.854 } 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "method": "bdev_wait_for_examine" 00:26:29.854 } 00:26:29.854 ] 00:26:29.854 }, 00:26:29.854 { 00:26:29.854 "subsystem": "nbd", 00:26:29.854 "config": [] 00:26:29.854 } 00:26:29.854 ] 00:26:29.854 }' 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85991 ']' 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:29.854 05:36:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:29.854 [2024-11-20 05:36:44.205323] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 00:26:29.854 [2024-11-20 05:36:44.205687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85991 ] 00:26:29.854 [2024-11-20 05:36:44.354218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.113 [2024-11-20 05:36:44.386849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.113 [2024-11-20 05:36:44.496901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:30.113 [2024-11-20 05:36:44.536538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:31.050 05:36:45 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.050 05:36:45 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:31.050 05:36:45 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:26:31.050 05:36:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:31.050 05:36:45 keyring_file -- keyring/file.sh@121 -- # jq length 00:26:31.310 05:36:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:31.310 05:36:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:26:31.310 05:36:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:31.310 05:36:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:31.310 05:36:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:31.310 05:36:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:31.310 05:36:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:31.568 05:36:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:26:31.568 05:36:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:26:31.568 05:36:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:31.568 05:36:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:31.568 05:36:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:31.568 05:36:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:31.568 05:36:45 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:31.827 05:36:46 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:26:31.827 05:36:46 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:26:31.827 05:36:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:31.827 05:36:46 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:26:32.086 05:36:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:26:32.086 05:36:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:32.086 05:36:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S7bGnh9t4t /tmp/tmp.eKavq1cjYz 00:26:32.086 05:36:46 keyring_file -- keyring/file.sh@20 -- # killprocess 85991 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85991 ']' 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85991 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85991 00:26:32.086 killing process with pid 85991 00:26:32.086 Received shutdown signal, test time was about 1.000000 seconds 00:26:32.086 00:26:32.086 Latency(us) 00:26:32.086 [2024-11-20T05:36:46.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.086 [2024-11-20T05:36:46.599Z] =================================================================================================================== 00:26:32.086 [2024-11-20T05:36:46.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85991' 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@971 -- # kill 85991 00:26:32.086 05:36:46 keyring_file -- common/autotest_common.sh@976 -- # wait 85991 00:26:32.345 05:36:46 keyring_file -- keyring/file.sh@21 -- # killprocess 85731 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85731 ']' 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85731 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85731 00:26:32.345 killing process with pid 85731 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85731' 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@971 -- # kill 85731 00:26:32.345 05:36:46 keyring_file -- common/autotest_common.sh@976 -- # wait 85731 00:26:32.604 00:26:32.604 real 0m15.583s 00:26:32.604 user 0m40.827s 00:26:32.604 sys 0m2.671s 00:26:32.604 ************************************ 00:26:32.604 END TEST keyring_file 00:26:32.604 
************************************ 00:26:32.604 05:36:46 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:32.604 05:36:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:32.604 05:36:46 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:26:32.604 05:36:46 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:32.604 05:36:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:32.604 05:36:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:32.604 05:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:32.604 ************************************ 00:26:32.604 START TEST keyring_linux 00:26:32.604 ************************************ 00:26:32.604 05:36:47 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:32.604 Joined session keyring: 195754134 00:26:32.604 * Looking for test storage... 00:26:32.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:32.604 05:36:47 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:32.604 05:36:47 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:32.604 05:36:47 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:26:32.863 05:36:47 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:32.863 05:36:47 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@345 -- # : 1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@368 -- # return 0 00:26:32.864 05:36:47 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.864 05:36:47 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:32.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.864 --rc genhtml_branch_coverage=1 00:26:32.864 --rc genhtml_function_coverage=1 00:26:32.864 --rc genhtml_legend=1 00:26:32.864 --rc geninfo_all_blocks=1 00:26:32.864 --rc geninfo_unexecuted_blocks=1 00:26:32.864 00:26:32.864 ' 00:26:32.864 05:36:47 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:32.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.864 --rc genhtml_branch_coverage=1 00:26:32.864 --rc genhtml_function_coverage=1 00:26:32.864 --rc genhtml_legend=1 00:26:32.864 --rc geninfo_all_blocks=1 00:26:32.864 --rc geninfo_unexecuted_blocks=1 00:26:32.864 00:26:32.864 ' 00:26:32.864 05:36:47 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:32.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.864 --rc genhtml_branch_coverage=1 00:26:32.864 --rc genhtml_function_coverage=1 00:26:32.864 --rc genhtml_legend=1 00:26:32.864 --rc geninfo_all_blocks=1 00:26:32.864 --rc geninfo_unexecuted_blocks=1 00:26:32.864 00:26:32.864 ' 00:26:32.864 05:36:47 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:32.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.864 --rc genhtml_branch_coverage=1 00:26:32.864 --rc genhtml_function_coverage=1 00:26:32.864 --rc genhtml_legend=1 00:26:32.864 --rc geninfo_all_blocks=1 00:26:32.864 --rc geninfo_unexecuted_blocks=1 00:26:32.864 00:26:32.864 ' 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.864 05:36:47 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=4bd82fc4-6e19-4d22-95c5-23a13095cd93 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.864 05:36:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.864 05:36:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.864 05:36:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.864 05:36:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.864 05:36:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:32.864 05:36:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:32.864 /tmp/:spdk-test:key0 00:26:32.864 05:36:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:32.864 05:36:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:26:32.864 05:36:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:32.865 05:36:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.865 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:32.865 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:32.865 05:36:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:32.865 05:36:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:32.865 05:36:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:32.865 /tmp/:spdk-test:key1 00:26:32.865 05:36:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:32.865 05:36:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86114 00:26:32.865 05:36:47 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.865 05:36:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86114 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 86114 ']' 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.865 05:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:33.123 [2024-11-20 05:36:47.394865] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
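For context, the PSK interchange string that prep_key writes to /tmp/:spdk-test:key0 (it appears below as NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:) can be rebuilt from the 00112233445566778899aabbccddeeff key text with a short sketch along these lines. This is inferred from the strings visible in this log rather than taken from the format_key helper itself, and the little-endian CRC32 suffix is an assumption:

key=00112233445566778899aabbccddeeff
python3 - <<EOF
import base64, zlib
data = b"$key"                                # key text exactly as it appears in this log
crc = zlib.crc32(data).to_bytes(4, "little")  # assumed 4-byte little-endian CRC32 suffix
print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(data + crc).decode()))
EOF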
00:26:33.123 [2024-11-20 05:36:47.395190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86114 ] 00:26:33.123 [2024-11-20 05:36:47.545968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.123 [2024-11-20 05:36:47.585741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.123 [2024-11-20 05:36:47.630959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:33.381 05:36:47 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.381 05:36:47 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:26:33.381 05:36:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:33.381 05:36:47 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.381 05:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:33.381 [2024-11-20 05:36:47.780077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.381 null0 00:26:33.381 [2024-11-20 05:36:47.812022] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:33.381 [2024-11-20 05:36:47.812252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:33.381 05:36:47 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.381 05:36:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:33.381 663300265 00:26:33.381 05:36:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:33.381 33063106 00:26:33.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:33.382 05:36:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86129 00:26:33.382 05:36:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86129 /var/tmp/bperf.sock 00:26:33.382 05:36:47 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 86129 ']' 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:33.382 05:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:33.382 [2024-11-20 05:36:47.891895] Starting SPDK v25.01-pre git sha1 866ba5ffe / DPDK 24.03.0 initialization... 
00:26:33.640 [2024-11-20 05:36:47.892131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86129 ] 00:26:33.640 [2024-11-20 05:36:48.039007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.640 [2024-11-20 05:36:48.072045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.640 05:36:48 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.640 05:36:48 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:26:33.640 05:36:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:33.641 05:36:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:34.207 05:36:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:34.207 05:36:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:34.466 [2024-11-20 05:36:48.775432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:34.466 05:36:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:34.466 05:36:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:34.725 [2024-11-20 05:36:49.052638] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:34.725 nvme0n1 00:26:34.725 05:36:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:34.725 05:36:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:34.725 05:36:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:34.725 05:36:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:34.725 05:36:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:34.725 05:36:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.292 05:36:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:35.292 05:36:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:35.292 05:36:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:35.292 05:36:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:35.292 05:36:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:35.292 05:36:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:35.292 05:36:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@25 -- # sn=663300265 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
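The key check that follows is essentially a cross-check between the kernel session keyring and what bdevperf reports over its RPC socket; a rough equivalent of what linux.sh does here (paths, socket, and key name taken from this log, not a verbatim excerpt of the script):

sn=$(keyctl search @s user :spdk-test:key0)    # serial of the key in the current session keyring
keyctl print "$sn"                             # prints the NVMeTLSkey-1:00:...: string loaded above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
  | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'   # expected to match "$sn"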
00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 663300265 == \6\6\3\3\0\0\2\6\5 ]] 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 663300265 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:35.550 05:36:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.550 Running I/O for 1 seconds... 00:26:36.485 11769.00 IOPS, 45.97 MiB/s 00:26:36.485 Latency(us) 00:26:36.485 [2024-11-20T05:36:50.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.485 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:36.485 nvme0n1 : 1.01 11772.32 45.99 0.00 0.00 10811.28 9413.35 20137.43 00:26:36.485 [2024-11-20T05:36:50.998Z] =================================================================================================================== 00:26:36.485 [2024-11-20T05:36:50.998Z] Total : 11772.32 45.99 0.00 0.00 10811.28 9413.35 20137.43 00:26:36.485 { 00:26:36.485 "results": [ 00:26:36.485 { 00:26:36.485 "job": "nvme0n1", 00:26:36.485 "core_mask": "0x2", 00:26:36.485 "workload": "randread", 00:26:36.485 "status": "finished", 00:26:36.485 "queue_depth": 128, 00:26:36.485 "io_size": 4096, 00:26:36.485 "runtime": 1.010591, 00:26:36.485 "iops": 11772.319365598942, 00:26:36.485 "mibps": 45.985622521870866, 00:26:36.485 "io_failed": 0, 00:26:36.485 "io_timeout": 0, 00:26:36.485 "avg_latency_us": 10811.284128160652, 00:26:36.485 "min_latency_us": 9413.352727272728, 00:26:36.485 "max_latency_us": 20137.425454545453 00:26:36.485 } 00:26:36.485 ], 00:26:36.485 "core_count": 1 00:26:36.485 } 00:26:36.744 05:36:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:36.744 05:36:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:37.002 05:36:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:37.002 05:36:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:37.002 05:36:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:37.002 05:36:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:37.002 05:36:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:37.002 05:36:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:37.261 05:36:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:37.261 05:36:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:37.261 05:36:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:37.261 05:36:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.261 05:36:51 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:37.261 05:36:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:37.520 [2024-11-20 05:36:51.865739] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:37.520 [2024-11-20 05:36:51.866689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd095d0 (107): Transport endpoint is not connected 00:26:37.520 [2024-11-20 05:36:51.867675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd095d0 (9): Bad file descriptor 00:26:37.520 [2024-11-20 05:36:51.868673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:37.520 [2024-11-20 05:36:51.868701] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:37.520 [2024-11-20 05:36:51.868715] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:37.520 [2024-11-20 05:36:51.868728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:37.520 request: 00:26:37.520 { 00:26:37.520 "name": "nvme0", 00:26:37.520 "trtype": "tcp", 00:26:37.520 "traddr": "127.0.0.1", 00:26:37.520 "adrfam": "ipv4", 00:26:37.520 "trsvcid": "4420", 00:26:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:37.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:37.520 "prchk_reftag": false, 00:26:37.520 "prchk_guard": false, 00:26:37.520 "hdgst": false, 00:26:37.520 "ddgst": false, 00:26:37.520 "psk": ":spdk-test:key1", 00:26:37.520 "allow_unrecognized_csi": false, 00:26:37.520 "method": "bdev_nvme_attach_controller", 00:26:37.520 "req_id": 1 00:26:37.520 } 00:26:37.520 Got JSON-RPC error response 00:26:37.520 response: 00:26:37.520 { 00:26:37.520 "code": -5, 00:26:37.520 "message": "Input/output error" 00:26:37.520 } 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@33 -- # sn=663300265 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 663300265 00:26:37.520 1 links removed 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@33 -- # sn=33063106 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 33063106 00:26:37.520 1 links removed 00:26:37.520 05:36:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86129 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 86129 ']' 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 86129 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86129 00:26:37.520 killing process with pid 86129 00:26:37.520 Received shutdown signal, test time was about 1.000000 seconds 00:26:37.520 00:26:37.520 Latency(us) 00:26:37.520 [2024-11-20T05:36:52.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.520 [2024-11-20T05:36:52.033Z] =================================================================================================================== 00:26:37.520 [2024-11-20T05:36:52.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.520 05:36:51 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86129' 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@971 -- # kill 86129 00:26:37.520 05:36:51 keyring_linux -- common/autotest_common.sh@976 -- # wait 86129 00:26:37.779 05:36:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86114 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 86114 ']' 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 86114 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86114 00:26:37.779 killing process with pid 86114 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86114' 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@971 -- # kill 86114 00:26:37.779 05:36:52 keyring_linux -- common/autotest_common.sh@976 -- # wait 86114 00:26:38.038 00:26:38.038 real 0m5.354s 00:26:38.038 user 0m11.257s 00:26:38.038 sys 0m1.367s 00:26:38.038 ************************************ 00:26:38.038 END TEST keyring_linux 00:26:38.038 ************************************ 00:26:38.038 05:36:52 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.038 05:36:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:38.038 05:36:52 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:38.038 05:36:52 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:38.038 05:36:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:38.038 05:36:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:38.038 05:36:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:38.038 05:36:52 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:38.038 05:36:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:38.038 05:36:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.038 05:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:38.038 05:36:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:38.038 05:36:52 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:26:38.038 05:36:52 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:26:38.038 05:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:39.972 INFO: APP EXITING 00:26:39.972 INFO: killing all VMs 
00:26:39.972 INFO: killing vhost app 00:26:39.972 INFO: EXIT DONE 00:26:40.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.230 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:40.230 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:40.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.054 Cleaning 00:26:41.054 Removing: /var/run/dpdk/spdk0/config 00:26:41.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:41.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:41.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:41.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:41.054 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:41.054 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:41.054 Removing: /var/run/dpdk/spdk1/config 00:26:41.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:41.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:41.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:41.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:41.054 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:41.054 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:41.054 Removing: /var/run/dpdk/spdk2/config 00:26:41.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:41.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:41.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:41.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:41.054 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:41.054 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:41.054 Removing: /var/run/dpdk/spdk3/config 00:26:41.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:41.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:41.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:41.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:41.054 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:41.054 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:41.054 Removing: /var/run/dpdk/spdk4/config 00:26:41.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:41.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:41.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:41.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:41.054 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:41.054 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:41.054 Removing: /dev/shm/nvmf_trace.0 00:26:41.054 Removing: /dev/shm/spdk_tgt_trace.pid57151 00:26:41.054 Removing: /var/run/dpdk/spdk0 00:26:41.054 Removing: /var/run/dpdk/spdk1 00:26:41.054 Removing: /var/run/dpdk/spdk2 00:26:41.054 Removing: /var/run/dpdk/spdk3 00:26:41.054 Removing: /var/run/dpdk/spdk4 00:26:41.054 Removing: /var/run/dpdk/spdk_pid56998 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57151 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57355 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57436 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57456 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57571 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57576 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57710 00:26:41.054 Removing: /var/run/dpdk/spdk_pid57911 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58071 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58149 00:26:41.054 
Removing: /var/run/dpdk/spdk_pid58225 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58319 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58404 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58437 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58472 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58542 00:26:41.054 Removing: /var/run/dpdk/spdk_pid58623 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59095 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59139 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59177 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59186 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59249 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59258 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59312 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59320 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59366 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59376 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59416 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59427 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59565 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59596 00:26:41.054 Removing: /var/run/dpdk/spdk_pid59674 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60012 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60031 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60062 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60070 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60091 00:26:41.054 Removing: /var/run/dpdk/spdk_pid60110 00:26:41.055 Removing: /var/run/dpdk/spdk_pid60128 00:26:41.055 Removing: /var/run/dpdk/spdk_pid60139 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60158 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60177 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60187 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60206 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60225 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60235 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60254 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60273 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60283 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60302 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60321 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60331 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60367 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60381 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60410 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60477 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60505 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60515 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60543 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60553 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60560 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60603 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60616 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60645 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60649 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60658 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60668 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60677 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60687 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60696 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60709 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60732 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60764 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60768 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60796 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60806 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60813 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60854 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60860 00:26:41.314 Removing: 
/var/run/dpdk/spdk_pid60892 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60894 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60907 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60909 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60922 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60924 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60937 00:26:41.314 Removing: /var/run/dpdk/spdk_pid60939 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61021 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61063 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61181 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61209 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61254 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61269 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61285 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61305 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61337 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61352 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61430 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61446 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61490 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61563 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61619 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61648 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61742 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61790 00:26:41.314 Removing: /var/run/dpdk/spdk_pid61817 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62049 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62141 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62170 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62199 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62227 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62266 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62294 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62331 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62714 00:26:41.314 Removing: /var/run/dpdk/spdk_pid62752 00:26:41.314 Removing: /var/run/dpdk/spdk_pid63090 00:26:41.314 Removing: /var/run/dpdk/spdk_pid63549 00:26:41.314 Removing: /var/run/dpdk/spdk_pid63830 00:26:41.314 Removing: /var/run/dpdk/spdk_pid64713 00:26:41.314 Removing: /var/run/dpdk/spdk_pid65634 00:26:41.314 Removing: /var/run/dpdk/spdk_pid65752 00:26:41.314 Removing: /var/run/dpdk/spdk_pid65818 00:26:41.314 Removing: /var/run/dpdk/spdk_pid67236 00:26:41.314 Removing: /var/run/dpdk/spdk_pid67549 00:26:41.314 Removing: /var/run/dpdk/spdk_pid71442 00:26:41.314 Removing: /var/run/dpdk/spdk_pid71813 00:26:41.314 Removing: /var/run/dpdk/spdk_pid71924 00:26:41.314 Removing: /var/run/dpdk/spdk_pid72053 00:26:41.314 Removing: /var/run/dpdk/spdk_pid72074 00:26:41.314 Removing: /var/run/dpdk/spdk_pid72095 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72122 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72207 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72335 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72484 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72564 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72756 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72827 00:26:41.573 Removing: /var/run/dpdk/spdk_pid72917 00:26:41.573 Removing: /var/run/dpdk/spdk_pid73276 00:26:41.573 Removing: /var/run/dpdk/spdk_pid73677 00:26:41.573 Removing: /var/run/dpdk/spdk_pid73679 00:26:41.573 Removing: /var/run/dpdk/spdk_pid73682 00:26:41.573 Removing: /var/run/dpdk/spdk_pid73938 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74194 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74577 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74586 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74921 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74935 
00:26:41.573 Removing: /var/run/dpdk/spdk_pid74955 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74982 00:26:41.573 Removing: /var/run/dpdk/spdk_pid74991 00:26:41.573 Removing: /var/run/dpdk/spdk_pid75348 00:26:41.573 Removing: /var/run/dpdk/spdk_pid75391 00:26:41.573 Removing: /var/run/dpdk/spdk_pid75729 00:26:41.573 Removing: /var/run/dpdk/spdk_pid75931 00:26:41.573 Removing: /var/run/dpdk/spdk_pid76377 00:26:41.573 Removing: /var/run/dpdk/spdk_pid76920 00:26:41.573 Removing: /var/run/dpdk/spdk_pid77839 00:26:41.573 Removing: /var/run/dpdk/spdk_pid78461 00:26:41.573 Removing: /var/run/dpdk/spdk_pid78463 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80502 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80555 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80614 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80668 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80777 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80824 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80877 00:26:41.573 Removing: /var/run/dpdk/spdk_pid80930 00:26:41.573 Removing: /var/run/dpdk/spdk_pid81292 00:26:41.573 Removing: /var/run/dpdk/spdk_pid82513 00:26:41.573 Removing: /var/run/dpdk/spdk_pid82662 00:26:41.573 Removing: /var/run/dpdk/spdk_pid82888 00:26:41.573 Removing: /var/run/dpdk/spdk_pid83490 00:26:41.573 Removing: /var/run/dpdk/spdk_pid83646 00:26:41.573 Removing: /var/run/dpdk/spdk_pid83805 00:26:41.573 Removing: /var/run/dpdk/spdk_pid83901 00:26:41.573 Removing: /var/run/dpdk/spdk_pid84067 00:26:41.573 Removing: /var/run/dpdk/spdk_pid84172 00:26:41.573 Removing: /var/run/dpdk/spdk_pid84865 00:26:41.573 Removing: /var/run/dpdk/spdk_pid84900 00:26:41.573 Removing: /var/run/dpdk/spdk_pid84940 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85192 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85227 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85257 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85731 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85735 00:26:41.573 Removing: /var/run/dpdk/spdk_pid85991 00:26:41.573 Removing: /var/run/dpdk/spdk_pid86114 00:26:41.573 Removing: /var/run/dpdk/spdk_pid86129 00:26:41.573 Clean 00:26:41.573 05:36:56 -- common/autotest_common.sh@1451 -- # return 0 00:26:41.573 05:36:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:41.573 05:36:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.573 05:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:41.831 05:36:56 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:41.831 05:36:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.831 05:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:41.831 05:36:56 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:41.831 05:36:56 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:41.831 05:36:56 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:41.831 05:36:56 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:41.831 05:36:56 -- spdk/autotest.sh@394 -- # hostname 00:26:41.831 05:36:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:42.089 geninfo: WARNING: invalid characters removed from testname! 
00:27:14.165 05:37:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:14.733 05:37:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:18.102 05:37:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:20.634 05:37:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:23.921 05:37:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:26.499 05:37:40 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:29.786 05:37:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:29.786 05:37:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:29.786 05:37:43 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:29.786 05:37:43 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:29.786 05:37:43 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:29.786 05:37:43 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:29.786 + [[ -n 5258 ]] 00:27:29.786 + sudo kill 5258 00:27:29.795 [Pipeline] } 00:27:29.810 [Pipeline] // timeout 00:27:29.816 [Pipeline] } 00:27:29.833 [Pipeline] // stage 00:27:29.839 [Pipeline] } 00:27:29.853 [Pipeline] // catchError 00:27:29.863 [Pipeline] stage 00:27:29.865 [Pipeline] { (Stop VM) 00:27:29.878 [Pipeline] sh 00:27:30.158 + vagrant halt 00:27:34.346 ==> default: Halting domain... 
00:27:39.629 [Pipeline] sh 00:27:39.907 + vagrant destroy -f 00:27:44.097 ==> default: Removing domain... 00:27:44.109 [Pipeline] sh 00:27:44.467 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:27:44.476 [Pipeline] } 00:27:44.490 [Pipeline] // stage 00:27:44.495 [Pipeline] } 00:27:44.508 [Pipeline] // dir 00:27:44.513 [Pipeline] } 00:27:44.527 [Pipeline] // wrap 00:27:44.532 [Pipeline] } 00:27:44.544 [Pipeline] // catchError 00:27:44.553 [Pipeline] stage 00:27:44.554 [Pipeline] { (Epilogue) 00:27:44.566 [Pipeline] sh 00:27:44.846 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:51.420 [Pipeline] catchError 00:27:51.422 [Pipeline] { 00:27:51.434 [Pipeline] sh 00:27:51.716 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:51.974 Artifacts sizes are good 00:27:51.984 [Pipeline] } 00:27:51.997 [Pipeline] // catchError 00:27:52.009 [Pipeline] archiveArtifacts 00:27:52.016 Archiving artifacts 00:27:52.174 [Pipeline] cleanWs 00:27:52.220 [WS-CLEANUP] Deleting project workspace... 00:27:52.220 [WS-CLEANUP] Deferred wipeout is used... 00:27:52.227 [WS-CLEANUP] done 00:27:52.229 [Pipeline] } 00:27:52.246 [Pipeline] // stage 00:27:52.252 [Pipeline] } 00:27:52.265 [Pipeline] // node 00:27:52.270 [Pipeline] End of Pipeline 00:27:52.295 Finished: SUCCESS